
ENZO-M — A hybrid approach for optimizing neural networks by evolution and learning

  • Evolutionary Algorithms for Neural Networks
  • Conference paper in: Parallel Problem Solving from Nature — PPSN III (PPSN 1994)
  • Part of the book series: Lecture Notes in Computer Science (LNCS, volume 866)

Abstract

ENZO-M combines two successful search techniques operating on two different timescales: learning (gradient descent) for fine-tuning each offspring, and evolution for coarse optimization steps on the network topology. Our evolutionary algorithm is therefore a metaheuristic built on top of the best available local heuristic. Because each offspring is trained by fast gradient methods, the search space of the evolutionary algorithm is effectively reduced to the set of local optima.
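The interplay of the two timescales can be sketched on a toy problem. This is a minimal illustration, not the paper's implementation: the function names, the quadratic fitness, and all parameters are assumptions made for the example.

```python
import random

random.seed(0)
TARGET = [1.0, -2.0, 0.5]  # hidden optimum of the toy objective

def fitness(weights):
    # Toy objective: negative squared error from the target (higher is better).
    return -sum((w - t) ** 2 for w, t in zip(weights, TARGET))

def finetune(weights, steps=50, lr=0.1):
    """Inner timescale: gradient descent ("learning") applied to one offspring."""
    w = list(weights)
    for _ in range(steps):
        # Gradient of the squared error with respect to each weight.
        w = [wi - lr * 2 * (wi - t) for wi, t in zip(w, TARGET)]
    return w

def mutate(weights, sigma=0.5):
    """Outer timescale: a coarse evolutionary step."""
    return [w + random.gauss(0, sigma) for w in weights]

# Every individual in the population is trained before it competes, so the
# evolutionary search effectively ranges over local optima only.
population = [finetune([random.uniform(-3, 3) for _ in range(3)]) for _ in range(4)]
for generation in range(10):
    parent = max(population, key=fitness)      # selection
    offspring = finetune(mutate(parent))       # coarse step, then learning
    worst = min(population, key=fitness)
    if fitness(offspring) > fitness(worst):    # replace the worst individual
        population[population.index(worst)] = offspring

best = max(population, key=fitness)
```

The point of the sketch is the division of labor: mutation makes large, discrete moves, while gradient descent pulls each candidate into the nearest local optimum before it is evaluated.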

By using the parental weights to initialize the weights of each offspring, gradient descent (learning) is sped up by 1–2 orders of magnitude, and the expected fitness of the trained offspring (the local minimum reached) lies far above the mean value for randomly initialized offspring. Thus, ENZO-M takes full advantage of both the knowledge transferred from the parental gene string by the evolutionary search and the efficiently computable gradient information used for fine-tuning.
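The effect of inheriting parental weights can be demonstrated on the same kind of toy objective. Again a hedged sketch: the target, the two starting points, and the tolerance are illustrative assumptions chosen only to contrast an inherited start with a random one.

```python
def steps_to_converge(w0, target, lr=0.1, tol=1e-3):
    """Count gradient steps until the squared error drops below tol."""
    w, steps = list(w0), 0
    while sum((wi - t) ** 2 for wi, t in zip(w, target)) > tol:
        w = [wi - lr * 2 * (wi - t) for wi, t in zip(w, target)]
        steps += 1
    return steps

target = [1.0, -2.0, 0.5]
trained_parent = [1.1, -1.9, 0.55]   # offspring starts near the parent's optimum
random_init = [4.0, 3.0, -5.0]       # offspring trained from scratch

inherited = steps_to_converge(trained_parent, target)
scratch = steps_to_converge(random_init, target)
```

Starting near the parent's already-trained weights, the offspring needs far fewer gradient steps than a randomly initialized one, which is the mechanism behind the reported speed-up.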

Through the cooperation of the discrete mutation operator and the continuous weight-decay method, ENZO-M impressively thins out the topology of feedforward neural networks. In particular, ENZO-M also tries to cut the connections to possibly redundant input units. It therefore not only supports the user in network design but also recognizes redundant input units.
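The cooperation of continuous decay and discrete pruning can be sketched as follows. The weight matrix, the threshold, and the helper names are illustrative assumptions: weight decay is taken to have already driven the connections of one input unit near zero, and the discrete step then cuts them.

```python
# Toy network: weights from 4 input units (rows) to 3 hidden units (columns).
# Weight decay has shrunk all connections of input unit 3 to near zero.
weights = [
    [0.9, -1.2, 0.003],
    [0.4, 0.8, -0.002],
    [-0.7, 0.3, 0.001],
    [0.004, -0.003, 0.002],   # input unit 3: every outgoing weight is tiny
]

PRUNE_THRESHOLD = 0.01  # illustrative cutoff

def prune(weights, threshold):
    """Discrete mutation step: cut connections whose decayed weight is tiny."""
    return [[w if abs(w) >= threshold else 0.0 for w in row] for row in weights]

def redundant_inputs(weights):
    """An input unit with no surviving outgoing connection is redundant."""
    return [i for i, row in enumerate(weights) if all(w == 0.0 for w in row)]

pruned = prune(weights, PRUNE_THRESHOLD)
dead = redundant_inputs(pruned)
```

Once every outgoing connection of an input unit has been cut, the unit itself can be removed, which is how redundant inputs are recognized as a by-product of topology thinning.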




Editor information

Yuval Davidor, Hans-Paul Schwefel, Reinhard Männer


Copyright information

© 1994 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Braun, H., Zagorski, P. (1994). ENZO-M — A hybrid approach for optimizing neural networks by evolution and learning. In: Davidor, Y., Schwefel, HP., Männer, R. (eds) Parallel Problem Solving from Nature — PPSN III. PPSN 1994. Lecture Notes in Computer Science, vol 866. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-58484-6_287


  • DOI: https://doi.org/10.1007/3-540-58484-6_287


  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-58484-1

  • Online ISBN: 978-3-540-49001-2

