Abstract
A new supervised learning algorithm is proposed. It teaches spatiotemporal patterns to a recurrent neural network with arbitrary feedback connections. In this method, the network, with fixed connection weights, is run for a given period of time under a given external input and initial condition. The weights are then changed so that the total error from the time-dependent teacher signal over this period is maximally decreased. This algorithm is equivalent to the back-propagation method for recurrent networks when a discrete-time prescription is adopted; the continuous-time formalism, however, seems better suited to temporal-processing applications.
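As a rough illustration of the discrete-time case mentioned in the abstract (not the paper's own formulation), the scheme can be sketched as backpropagation through an Euler-discretized recurrent network: the fixed-weight network is integrated over a period, the error against a time-dependent teacher trajectory is accumulated, and the weights are updated along the negative gradient of that integrated error. The dynamics dy/dt = -y + W·tanh(y) + x, the network size, the teacher signal, and the learning rate below are all invented for the example.

```python
import numpy as np

def forward(W, y0, x, teacher, dt):
    """Euler-integrate dy/dt = -y + W tanh(y) + x over the teaching period.
    Returns the state trajectory and the integrated squared error."""
    ys, y, E = [y0], y0, 0.0
    for t in range(len(teacher)):
        y = y + dt * (-y + W @ np.tanh(y) + x)
        ys.append(y)
        E += 0.5 * np.sum((y - teacher[t]) ** 2) * dt
    return ys, E

def bptt_grad(W, ys, teacher, dt):
    """Backpropagate the integrated error through the discretized dynamics."""
    grad = np.zeros_like(W)
    delta = np.zeros(W.shape[0])          # dE/dy at the current time step
    for t in range(len(teacher) - 1, -1, -1):
        delta = delta + (ys[t + 1] - teacher[t]) * dt   # inject local error
        s = np.tanh(ys[t])
        grad += dt * np.outer(delta, s)   # y_{t+1} depends on W via dt*W*tanh(y_t)
        # chain rule through y_{t+1} = y_t + dt*(-y_t + W tanh(y_t) + x)
        delta = (1 - dt) * delta + dt * (1 - s ** 2) * (W.T @ delta)
    return grad

rng = np.random.default_rng(0)
n, T, dt, eta = 3, 40, 0.1, 0.1
W = 0.1 * rng.standard_normal((n, n))
y0 = np.zeros(n)
x = np.array([0.5, -0.3, 0.2])
ts = dt * np.arange(1, T + 1)
# hypothetical teacher signal: a small sinusoid per unit
teacher = np.stack([0.3 * np.sin(ts + k) for k in range(n)], axis=1)

_, e0 = forward(W, y0, x, teacher, dt)
for _ in range(300):                      # repeat run-then-update cycles
    ys, _ = forward(W, y0, x, teacher, dt)
    W -= eta * bptt_grad(W, ys, teacher, dt)
_, e1 = forward(W, y0, x, teacher, dt)
assert e1 < e0                            # integrated error has decreased
```

Shrinking the step dt recovers the continuous-time limit the abstract argues is better suited to temporal processing; the discrete update above is the "back propagation for the recurrent network" to which the algorithm is said to be equivalent.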
Cite this article
Sato, Ma. A learning algorithm to teach spatiotemporal patterns to recurrent neural networks. Biol. Cybern. 62, 259–263 (1990). https://doi.org/10.1007/BF00198101