Assisted Research of the Dynamic Neural Networks with Time-Delays and Recurrent Links

Abstract:

The paper presents the assisted research of a new model of digital dynamic neural network, carried out with dedicated LabVIEW virtual instrumentation and a corresponding mathematical model. Several approaches to optimizing the convergence process were investigated: applying a one-step time delay to the outputs of the first and second neural layers; combining recurrent links with time delays; and replacing the simple sigmoid activation function with the bipolar sigmoid (hyperbolic tangent) function. On-line simulation of the neural network makes it possible to determine how the network parameters, such as the input data, the weight and bias matrices, the activation functions, the closed loops, and the time delays, influence the gradient errors during the convergence process. Using the dedicated LabVIEW virtual instrumentation on-line, the influence of the network parameters, namely the number of elements in the input data vector and the number of neurons in each layer, on the number of iterations required to cancel the mean square error with respect to the target was established. The optimization research relied on minimizing the gradient error function between the output and the target.
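The paper's implementation is in LabVIEW virtual instrumentation; as a rough illustration of the mechanisms the abstract describes, the sketch below builds a two-layer network in NumPy with a one-step time delay on the hidden-layer output fed back through a recurrent link, the hyperbolic-tangent (bipolar sigmoid) activation, and gradient descent on the mean square error between output and target. All dimensions, the toy training data, and the truncated Elman-style gradient rule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Network dimensions (illustrative values; the paper studies how the input
# vector size and the number of neurons per layer affect convergence).
n_in, n_hid, n_out = 3, 5, 1

# Weights and biases for a two-layer network; W_rec feeds the one-step
# delayed hidden-layer output back into the hidden layer (recurrent link).
W1 = rng.normal(scale=0.5, size=(n_hid, n_in))
W_rec = rng.normal(scale=0.5, size=(n_hid, n_hid))
b1 = np.zeros(n_hid)
W2 = rng.normal(scale=0.5, size=(n_out, n_hid))
b2 = np.zeros(n_out)

def forward(x_seq):
    """Run the delayed-recurrent network over a sequence of input vectors."""
    h_prev = np.zeros(n_hid)          # time-delayed hidden output (z^-1)
    outputs = []
    for x in x_seq:
        # Bipolar sigmoid (tanh) activation in both layers.
        h = np.tanh(W1 @ x + W_rec @ h_prev + b1)
        y = np.tanh(W2 @ h + b2)
        outputs.append((x, h_prev.copy(), h, y))
        h_prev = h                    # one-step delay for the next sample
    return outputs

def train_step(x_seq, t_seq, lr=0.1):
    """One gradient-descent pass minimizing the mean square error to the target."""
    global W1, W_rec, b1, W2, b2
    mse = 0.0
    for (x, h_prev, h, y), t in zip(forward(x_seq), t_seq):
        e = y - t                     # output error vs. target
        # Backpropagate through tanh; the delayed state is treated as a
        # constant input (truncated, Elman-style training assumption).
        d_out = e * (1.0 - y**2)
        d_hid = (W2.T @ d_out) * (1.0 - h**2)
        W2 -= lr * np.outer(d_out, h);   b2 -= lr * d_out
        W1 -= lr * np.outer(d_hid, x);   b1 -= lr * d_hid
        W_rec -= lr * np.outer(d_hid, h_prev)
        mse += float(e @ e)
    return mse / len(x_seq)

# Toy usage: learn a bounded function of the input vector.
xs = [rng.normal(size=n_in) for _ in range(20)]
ts = [np.array([np.tanh(x.sum())]) for x in xs]
for epoch in range(200):
    err = train_step(xs, ts)
print(f"final MSE: {err:.4f}")
```

Watching the returned mean square error per pass plays the same role as the paper's on-line convergence monitoring: changing n_in, n_hid, the activation, or the recurrent/delay structure changes how many iterations are needed before the error is effectively cancelled.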

Info:

Periodical: Advanced Materials Research (Volumes 463-464)

Pages: 1094-1097

Online since: February 2012
