Applied Soft Computing

Volume 8, Issue 2, March 2008, Pages 906-918

Opposition versus randomness in soft computing techniques

https://doi.org/10.1016/j.asoc.2007.07.010

Abstract

For many soft computing methods, we need to generate random numbers to use either as initial estimates or during the learning and search process. Recently, results for evolutionary algorithms, reinforcement learning, and neural networks have been reported which indicate that the simultaneous consideration of randomness and opposition is more advantageous than pure randomness. This new scheme, called opposition-based learning, has the apparent effect of accelerating soft computing algorithms. This paper proves this advantage mathematically as well as experimentally and, as an application, applies it to accelerate differential evolution (DE). By taking advantage of random numbers and their opposites, the optimization, search, or learning process in many soft computing techniques can be accelerated when there is no a priori knowledge about the solution. The mathematical proofs and the results of the conducted experiments confirm each other.

Introduction

The footprints of the opposition concept can be observed in many areas around us, although it has often appeared under different names. Opposite particles in physics, antonyms in languages, the complement of an event in probability, antithetic variables in simulation, opposite proverbs in culture, the absolute or relative complement in set theory, subject and object in the philosophy of science, good and evil in animism, opposition parties in politics, theses and antitheses in dialectics, opposition day in parliaments, and dualism in religions and philosophies are just a few examples. Table 1 contains more instances and corresponding details.

It seems that without the concept of opposition, explaining the different entities around us would be hard and perhaps even impossible. To explain an entity or a situation, we sometimes explain its opposite instead. In fact, opposition often manifests itself as a balance between completely different entities. For instance, east, west, south, and north cannot be defined alone; the same holds for cold and hot and many other examples. Extreme opposites constitute our upper and lower boundaries. Infinity is vague to imagine, but the limited is more imaginable because its opposite is definable.

Many machine intelligence or soft computing algorithms are inspired by natural systems. Genetic algorithms, neural networks, reinforcement agents, and ant colonies are, to mention some examples, well-established methodologies motivated by evolution, the human nervous system, psychology, and animal intelligence, respectively. Learning in natural contexts such as these is generally sluggish. Genetic changes, for instance, take generations to introduce a new direction into biological development. Behavior adjustment based on evaluative feedback, such as reward and punishment, needs prolonged learning time as well.

In many cases the learning begins at a random point. We, so to speak, begin from scratch and move toward an existing solution. The weights of a neural network are initialized randomly, the population initialization in evolutionary algorithms (e.g., GA, DE, and PSO) is performed randomly, and the action policy of reinforcement agents is initially based on randomness, to mention some examples.

Generally, we deal with complex control problems [1], [2]. A random guess, if not far away from the optimal solution, can result in fast convergence. However, if we begin with a random guess that is very far away from the existing solution, say, in the worst case, at the opposite location, then the approximation, search, or optimization will take considerably more time or, in the worst case, become intractable. Of course, in the absence of any a priori knowledge, it is not possible to make the best initial guess. Logically, we should look in all directions simultaneously or, more efficiently, in the opposite direction.

The scheme of opposition-based learning (OBL) was introduced by H.R. Tizhoosh [3]. In a very short time, it was applied to enhance reinforcement learning [4], [5], [6], [8], [18], [34], differential evolution [10], [11], [12], [13], [14], backpropagation neural networks [15], [17], simulated annealing [9], ant colony optimization [16], and window memorization for morphological algorithms [7]. Although the empirical results in these papers confirm that the concept of opposition-based learning is general enough to be utilized in a wide range of learning and optimization fields to make the algorithms faster, we are still faced with a fundamental question: why is an opposite number beneficial compared with an independent random number? In other words, why should a second random number not be used instead of an opposite number? Many optimization methods need to generate random numbers for initialization or during the search process, such as differential evolution (DE) [36], genetic algorithms (GAs) [20], ant colony optimization (ACO) [21], particle swarm optimization (PSO) [22], simulated annealing (SA) [23], and random search (RS) [24].

In this paper, we prove that, in terms of convergence speed, utilizing random numbers and their opposites is more beneficial than pure randomness for generating initial estimates in the absence of a priori knowledge about the solution. In fact, we show mathematically and empirically why opposite numbers have a higher chance of being fitter estimates than additional independent random guesses.

Finally, a population-based algorithm, differential evolution (DE), is selected to be accelerated using the opposition concept. The main reason for selecting DE is its high ability to find precise solutions for mixed-type black-box global optimization problems [25]. Population-based algorithms are computationally expensive, and hence their acceleration is widely appreciated. Among the various evolutionary algorithms [26], [27], [28], differential evolution (DE) is well known for its effectiveness and robustness. Frequently reported studies demonstrate that DE outperforms many other optimizers over both benchmark functions and real-world applications.

The rest of this paper is organized as follows: Section 2 covers the definitions, theorems, and proofs corresponding to opposition-based learning. Empirical verification of the mathematical results is given in Section 3. Employing OBL to accelerate differential evolution is investigated in Section 4. Finally, the paper is concluded in Section 5.


Opposition-based learning (OBL): Definitions, theorems, and proofs

Definition 1

Let $x$ be a real number in an interval $[a,b]$, i.e. $x \in [a,b]$; the opposite of $x$, denoted by $\breve{x}$, is defined by
$$\breve{x} = a + b - x.$$

Fig. 1 (top) illustrates $x$ and its opposite $\breve{x}$ in the interval $[a,b]$. As seen, $x$ and $\breve{x}$ are located at equal distances from the interval's center ($|(a+b)/2 - x| = |\breve{x} - (a+b)/2|$) as well as from the interval's boundaries ($|x - a| = |b - \breve{x}|$).
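To make this concrete, the following is a minimal Python sketch of Definition 1 and the two distance identities above (our illustration, not from the paper; the interval and value are arbitrary):

```python
def opposite(x: float, a: float, b: float) -> float:
    """Opposite of x in the interval [a, b] (Definition 1)."""
    return a + b - x

# Example: in [0, 10], the opposite of 2 is 8.
x, a, b = 2.0, 0.0, 10.0
x_op = opposite(x, a, b)                      # 8.0

# x and its opposite lie at equal distances from the interval's center...
assert abs((a + b) / 2 - x) == abs(x_op - (a + b) / 2)
# ...and at equal distances from the interval's boundaries.
assert abs(x - a) == abs(b - x_op)
```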

This definition can be extended to higher dimensions by applying the same formula to each dimension [3].

Definition 2

Let $P(x_1, x_2, \ldots, x_D)$ be a point in $D$-dimensional space, where $x_1, x_2, \ldots, x_D$ are real numbers and $x_i \in [a_i, b_i]$, $i = 1, 2, \ldots, D$. The opposite point $\breve{P}(\breve{x}_1, \breve{x}_2, \ldots, \breve{x}_D)$ is defined componentwise by $\breve{x}_i = a_i + b_i - x_i$.
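A short sketch of the componentwise opposite point of Definition 2, assuming NumPy arrays for the point and the per-dimension bounds (the function name is ours):

```python
import numpy as np

def opposite_point(x: np.ndarray, a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Componentwise opposite of a D-dimensional point (Definition 2):
    the i-th coordinate of the opposite is a_i + b_i - x_i."""
    return a + b - x

# Example in D = 3:
a = np.array([0.0, -1.0, 5.0])
b = np.array([10.0, 1.0, 6.0])
x = np.array([2.0, 0.5, 5.25])
print(opposite_point(x, a, b))                # [ 8.   -0.5   5.75]
```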

Empirical verification of mathematical results

In this section, the aforementioned mathematical proofs are experimentally verified and the usefulness of opposite numbers in higher-dimensional spaces is investigated. For this purpose, three random points in a $D$-dimensional space are generated ($n$ times), called $X$, $X_s$, and $X_r$. Then, the number of times (out of $n$) that $X$, $\breve{X}$, or $X_r$ is the closest to the randomly generated solution $X_s$ (measured by Euclidean distance) is counted, and finally the probability of closeness of each point is calculated.
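A minimal Monte Carlo sketch of this experiment (our reconstruction; the unit hypercube domain, the random seed, and the dimension/trial counts are assumptions, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(0)
D, n = 5, 100_000                             # dimension and number of trials
wins = {"X": 0, "X_opposite": 0, "X_r": 0}

for _ in range(n):
    # Three independent uniform random points in [0, 1]^D: the candidate X,
    # the "solution" X_s, and a second independent candidate X_r.
    X, X_s, X_r = rng.random(D), rng.random(D), rng.random(D)
    X_op = 1.0 - X                            # opposite of X (a_i = 0, b_i = 1)

    dist = {"X": np.linalg.norm(X - X_s),
            "X_opposite": np.linalg.norm(X_op - X_s),
            "X_r": np.linalg.norm(X_r - X_s)}
    wins[min(dist, key=dist.get)] += 1        # closest point to the solution

for name, count in wins.items():
    print(f"P({name} is closest) ~ {count / n:.3f}")
```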

Employing OBL to accelerate differential evolution

Differential evolution (DE) was proposed by Price and Storn in 1995 [29], [31]. It is an effective, robust, and simple global optimization algorithm [36] which has only a few control parameters. According to frequently reported comprehensive studies [19], [32], [33], [36], DE outperforms many other optimization methods in terms of convergence speed and robustness over common benchmark functions and real-world problems. Generally speaking, all population-based optimization algorithms, DE being no exception, suffer from long computational times because of their evolutionary nature.
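To illustrate how the opposition concept plugs into DE, here is a minimal sketch of opposition-based population initialization, the simplest ingredient of opposition-based DE; function and variable names are ours, and the generation-jumping step of the full algorithm is omitted:

```python
import numpy as np

def obl_init_population(f, a, b, NP, rng):
    """Opposition-based population initialization (a sketch).

    f    : objective function to minimize, f(x) -> float
    a, b : per-dimension lower/upper bounds, arrays of length D
    NP   : population size
    Draws NP uniform random points, forms their NP opposites, and keeps
    the NP fittest individuals out of the 2*NP candidates.
    """
    D = len(a)
    P = a + (b - a) * rng.random((NP, D))     # random population in [a, b]^D
    OP = a + b - P                            # opposite population
    pool = np.vstack([P, OP])                 # 2*NP candidates
    fitness = np.apply_along_axis(f, 1, pool)
    return pool[np.argsort(fitness)[:NP]]     # keep the NP fittest

# Usage on the sphere function in [-5.12, 5.12]^10:
rng = np.random.default_rng(42)
a, b = np.full(10, -5.12), np.full(10, 5.12)
population = obl_init_population(lambda x: np.sum(x**2), a, b, NP=20, rng=rng)
```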

Conclusion

For many soft computing techniques, in the absence of a priori information about the solution, a purely random guess is usually the only option for generating candidate solutions. Obviously, the computation time, among other factors, is directly related to the distance of the guess from the optimal solution.

Recently published experimental results indicate that employing opposition-based learning within existing soft computing algorithms can accelerate the learning and search process. Promising results have been reported.

References (41)

  • M. Shokri et al., Opposition-based Q(λ) algorithm
  • F. Khalvati, H.R. Tizhoosh, M.D. Aagaard, Opposition-based window memorization for morphological algorithms, in: IEEE...
  • H.R. Tizhoosh, M. Shokri, M.S. Kamel, Opposition-based Q(λ) with non-Markovian update, in: IEEE Symposium on...
  • M. Ventresca, H.R. Tizhoosh, Simulated annealing with opposite neighbors, in: IEEE Symposium on Foundations of...
  • S. Rahnamayan et al., Opposition-based differential evolution algorithms
  • S. Rahnamayan et al., Opposition-based differential evolution for optimization of noisy problems
  • S. Rahnamayan, H.R. Tizhoosh, M.M.A. Salama, Opposition-based differential evolution (ODE), IEEE Trans. Evol....
  • S. Rahnamayan, H.R. Tizhoosh, M.M.A. Salama, Opposition-based differential evolution (ODE) with variable jumping rate,...
  • M. Ventresca et al., Improving the convergence of backpropagation by opposite transfer functions
  • H.R. Tizhoosh, A.R. Malisia, Applying opposition-based ideas to the ant colony system, in: IEEE Symposium on...