- ANAND, R., MEHROTRA, K. G., MOHAN, C. K., AND RANKA, S. 1993. An improved algorithm for neural network classification of imbalanced training sets. IEEE Trans. Neural Netw. 4.
- AHMAD, M. AND SALAM, F. M. A. 1992a. Dynamic learning using exponential energy function. In Proceedings of the International Joint Conference on Neural Networks.
- AHMAD, M. AND SALAM, F. M. A. 1992b. Error back-propagation learning using the polynomial energy function. In Proceedings of the Fourth International Conference on Systems Engineering.
- AHMAD, M. AND SALAM, F. M. A. 1992c. Supervised learning using the Cauchy energy function. In Proceedings of the International Conference on Fuzzy Logic and Neural Networks.
- BATTITI, R. 1992. First- and second-order methods for learning: Between steepest descent and Newton's method. Neural Comput. 4.
- BATTITI, R. 1989. Accelerated backpropagation learning: Two optimization methods. Complex Syst. 3.
- BRYSON, A. E. AND HO, Y. C. 1969. Applied Optimal Control. Blaisdell, Waltham, MA.
- DEVOS, M. R. AND ORBAN, G. A. 1988. Self learning backpropagation. In Proceedings of NeuroNimes.
- EATON, H. A. C. AND OLIVIER, T. L. 1992. Learning coefficient dependence on training set size. Neural Netw. 5.
- FAHLMAN, S. E. 1988. An empirical study of learning speed in back-propagation networks. Tech. Rep. CMU-CS-88-162, Carnegie Mellon University, Pittsburgh, PA.
- HINTON, G. E. 1989. Connectionist learning procedures. Artif. Intell. 40.
- HINTON, G. E. 1987. Learning translation invariant recognition in massively parallel networks. In Lecture Notes in Computer Science. Springer-Verlag.
- HOLT, M. J. J. AND SEMNANI, S. 1990. Convergence of back-propagation in neural networks using a log-likelihood cost function. Electron. Lett. 26.
- JACOBS, R. A. 1988. Increased rate of convergence through learning rate adaptation. Neural Netw. 1.
- KINSELLA, J. A. 1992. Comparison and evaluation of variants of the conjugate gradient method for learning in feed-forward neural networks with backward error propagation. Network 3.
- LE CUN, Y. 1988. A theoretical framework for backpropagation. In Proceedings of the 1988 Connectionist Models Summer School.
- LE CUN, Y. 1986. Learning process in an asymmetric threshold network. In Disordered Systems and Biological Organization. Springer-Verlag.
- LIPPMANN, R. P. 1987. An introduction to computing with neural nets. IEEE ASSP Mag. 4 (April), 4-22.
- MINAI, A. A. AND WILLIAMS, R. D. 1990a. Acceleration of back-propagation through learning rate and momentum adaptation. In Proceedings of the International Joint Conference on Neural Networks (Washington, DC).
- MINAI, A. A. AND WILLIAMS, R. D. 1990b. Backpropagation heuristics: A study of the extended delta-bar-delta algorithm. In Proceedings of the International Joint Conference on Neural Networks (San Diego, CA).
- PARKER, D. B. 1985. Learning logic. Tech. Rep., MIT, Cambridge, MA.
- PIREZ, Y. M. AND SARKAR, D. 1991. Back-propagation with controlled oscillation of weights. Tech. Rep. TR-CS-01-91, University of Miami, FL.
- RIGLER, A. K., IRVINE, J. M., AND VOGL, T. P. 1991. Rescaling of variables in back propagation learning. Neural Netw. 4.
- RUMELHART, D. E., HINTON, G. E., AND WILLIAMS, R. J. 1986. Learning internal representations by error propagation. In Parallel Distributed Processing: Explorations in the Microstructure of Cognition, vol. 1. MIT Press, Cambridge, MA.
- RUMELHART, D. E., MCCLELLAND, J. L., ET AL. 1986. Parallel Distributed Processing: Explorations in the Microstructure of Cognition, vol. 1. MIT Press, Cambridge, MA.
- SAMAD, T. 1991. Back propagation with expected source values. Neural Netw. 4.
- SHANNO, D. F. 1978. Conjugate gradient methods with inexact searches. Math. Oper. Res. 3.
- SIETSMA, J. AND DOW, R. J. 1991. Creating artificial neural networks that generalize. Neural Netw. 4.
- SOLLA, S. A., LEVIN, E., AND FLEISHER, M. 1988. Accelerated learning in layered neural networks. Complex Syst. 2.
- TESAURO, G. 1987. Scaling relationships in back-propagation learning: Dependence on training set size. Complex Syst. 1.
- TESAURO, G. AND JANSSENS, B. 1988. Scaling relationships in back-propagation learning. Complex Syst. 2.
- TOLLENAERE, T. 1990. SuperSAB: Fast adaptive back propagation with good scaling properties. Neural Netw. 3.
- VAN OOYEN, A. AND NIENHUIS, B. 1992. Improving the convergence of the back-propagation algorithm. Neural Netw. 5.
- WATROUS, R. 1988. Learning algorithms for connectionist networks: Applied gradient methods for nonlinear optimization. Tech. Rep. MS-CIS-88-62, University of Pennsylvania.
- WEIR, M. K. 1991. A method for self-determination of adaptive learning rates in back propagation. Neural Netw. 4.
- WERBOS, P. J. 1974. Beyond regression: New tools for prediction and analysis in the behavioral sciences. Ph.D. thesis, Harvard University.
- WERBOS, P. J. 1990. Backpropagation through time: What it does and how to do it. Proceedings of the IEEE 78.
- WIDROW, B. AND LEHR, M. A. 1990. 30 years of adaptive neural networks: Perceptron, madaline, and backpropagation. Proceedings of the IEEE 78.