A learning algorithm to teach spatiotemporal patterns to recurrent neural networks

Abstract

A new supervised learning algorithm is proposed that teaches spatiotemporal patterns to a recurrent neural network with arbitrary feedback connections. In this method, the network is run with fixed connection weights for a given period of time under a given external input and initial condition. The weights are then changed so that the total error from the time-dependent teacher signal over this period is maximally decreased. The algorithm is equivalent to the back-propagation method for recurrent networks when a discrete-time prescription is adopted; however, the continuous-time formalism appears better suited to temporal-processing applications.
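The following is a minimal illustrative sketch of the procedure described in the abstract, not the authors' original code: a continuous-time recurrent network is Euler-discretized, run for a fixed period from a given initial condition under an external input, and its weights are then adjusted along the gradient of the total error from a time-dependent teacher signal (i.e., backpropagation through time). The network size, time constant, learning rate, and the sinusoidal teacher trajectory are arbitrary assumptions chosen only for this example.

```python
# Illustrative sketch (assumed details, not from the paper): a continuous-time
# recurrent network trained to follow a time-dependent teacher signal by
# gradient descent on the total error over a fixed run period.
import jax
import jax.numpy as jnp

N, T, dt, tau = 4, 200, 0.05, 1.0           # units, time steps, step size, time constant
key = jax.random.PRNGKey(0)
W = 0.1 * jax.random.normal(key, (N, N))    # arbitrary feedback connections
x0 = jnp.zeros(N)                           # initial condition
ext = jnp.zeros((T, N))                     # external input (none in this example)
t = jnp.arange(T) * dt
teacher = jnp.stack([jnp.sin(t), jnp.cos(t),
                     jnp.sin(2 * t), jnp.cos(2 * t)], axis=1)  # teacher signal

def run(W):
    """Run the network with fixed weights over the whole period."""
    def step(x, inp):
        dx = (-x + jnp.tanh(W @ x) + inp) / tau   # continuous-time dynamics
        x = x + dt * dx                           # Euler discretization
        return x, x
    _, xs = jax.lax.scan(step, x0, ext)
    return xs

def total_error(W):
    """Total squared error from the teacher signal over the period."""
    xs = run(W)
    return 0.5 * jnp.sum((xs - teacher) ** 2) * dt

grad_fn = jax.jit(jax.grad(total_error))
lr = 0.05
for _ in range(200):
    # Alternate: run the fixed-weight network, then change the weights so the
    # total error over the period decreases.
    W = W - lr * grad_fn(W)

print("final total error:", float(total_error(W)))
```

The loop mirrors the run-then-update procedure stated in the abstract: the network is simulated with its weights held fixed for the full period, and only afterwards are the weights moved down the gradient of the accumulated error.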


Cite this article

Sato, Ma. A learning algorithm to teach spatiotemporal patterns to recurrent neural networks. Biol. Cybern. 62, 259–263 (1990). https://doi.org/10.1007/BF00198101
