IEEJ Transactions on Electronics, Information and Systems
Online ISSN : 1348-8155
Print ISSN : 0385-4221
ISSN-L : 0385-4221
A Learning Multiple-Valued Logic Network that can Explain Reasoning
Zheng Tang, Okihiko Ishizuka, Koichi Tanno

1999 Volume 119 Issue 8-9 Pages 970-978

Abstract

This paper describes a learning multiple-valued logic (MVL) network that can explain its reasoning. The learning MVL network is derived directly from a canonical realization of MVL functions, so its functional completeness is guaranteed. We extend traditional back-propagation to MVL networks and derive a specific learning algorithm for them. The algorithm combines back-propagation learning with other features of MVL networks, including prior human knowledge about the networks, such as the architecture, the number of hidden units and layers, and other useful network parameters. The prior knowledge embodied in the MVL canonical form can be used to initialize the parameters of the learning MVL network. As a result, the prior knowledge guides the back-propagation learning process to start from a point in parameter space that is not far from the optimum, so that back-propagation can easily fine-tune the prior knowledge to achieve the desired output. This cooperative relation between prior knowledge and the back-propagation learning process is not always present in neural networks. The learning process in the MVL network also exhibits behaviors analogous to those studied in cytology, in particular cell adhesion, cell apoptosis (the death of a cell), and cluster cell apoptosis (the death of a cluster of cells), and reproduces these properties in the artificial MVL network. Simulation results are given to confirm the effectiveness of the methods.
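To make the idea of "canonical-form prior knowledge as initialization, then back-propagation fine-tuning" concrete, the following is a minimal sketch in Python. It is not the authors' algorithm: the smoothed window literals, product-as-MIN / max-as-MAX term structure, the 4-valued toy target, and the numerical-gradient update are all illustrative assumptions standing in for the network and learning rule described in the paper.

```python
# Sketch (assumptions throughout): an MVL network built as MAX of product terms
# of smoothed "window" literals, initialized from a hand-written canonical-form
# guess and then fine-tuned by gradient descent on its thresholds.
import numpy as np

R = 4  # radix of the logic (values 0..R-1), assumed for this toy example

def window(x, lo, hi, beta=4.0):
    """Smoothed literal: close to 1 when lo <= x <= hi, close to 0 outside."""
    s = lambda t: 1.0 / (1.0 + np.exp(-beta * t))
    return s(x - lo + 0.5) * s(hi - x + 0.5)

def mvl_net(x, params):
    """MAX over terms; each term is a product (smooth MIN) of window literals."""
    terms = []
    for term in params:                       # term: one (lo, hi) pair per input
        lit = 1.0
        for xi, (lo, hi) in zip(x, term):
            lit *= window(xi, lo, hi)
        terms.append(lit)
    return (R - 1) * max(terms)               # hard MAX; a soft max would give smoother gradients

# Prior knowledge: a canonical-form guess for a toy 2-input, 4-valued function.
# These thresholds are an assumption standing in for expert knowledge.
params = [[(1.0, 2.0), (0.0, 3.0)],           # term 1: input a roughly in [1, 2]
          [(0.0, 3.0), (2.0, 3.0)]]           # term 2: input b roughly in [2, 3]

# Toy training data; the true function uses b >= 3, so the prior is close but not exact.
X = [(a, b) for a in range(R) for b in range(R)]
Y = [float((R - 1) * ((1 <= a <= 2) or (b >= 3))) for a, b in X]

def loss(params):
    return sum((mvl_net(x, params) - y) ** 2 for x, y in zip(X, Y)) / len(X)

# Fine-tuning: numerical gradients on each threshold (a stand-in for the
# back-propagation rule in the paper), starting from the canonical-form guess.
lr, eps = 0.05, 1e-4
for step in range(200):
    for t in range(len(params)):
        for i in range(len(params[t])):
            for j in range(2):
                base = loss(params)
                lohi = list(params[t][i])
                lohi[j] += eps
                params[t][i] = tuple(lohi)
                grad = (loss(params) - base) / eps
                lohi[j] -= eps + lr * grad    # undo the probe, then step downhill
                params[t][i] = tuple(lohi)

print("final loss:", round(float(loss(params)), 4))
```

The point of the sketch is the initialization: because the thresholds start at values read off a (possibly imperfect) canonical form, the gradient steps only have to correct them slightly, rather than search the whole parameter space from random values.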

© The Institute of Electrical Engineers of Japan