
Design of a 1st Generation Neurocomputer

Chapter in: VLSI Design of Neural Networks

Abstract

An analysis of today's neural paradigms brings to light a set of elementary, compute-intensive algorithmic strings that are shared by all neural models and therefore lend themselves to hardware implementation. 2-D arrays built from a dedicated VLSI neural signal processor, the MA16, which integrates these elementary strings as hard-wired functional blocks, offer a favourable solution to the architectural problem of mapping neural parallelism and adaptivity into silicon. The proposed neurocomputer concept can be scaled independently of the application domain in terms of processing power, memory size and flexibility, and is designed for throughputs that let the user tackle real-world applications in reasonable time. At the chip level, throughput rates of the order of 500 MC/sec (1 connection = 16 bits) are achievable with 1 μm CMOS technology. Two-dimensional systolic arrays of 16×16 MA16 chips will allow processing rates of 128 GC/sec.
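As a rough sanity check on the quoted figures, the following sketch (hypothetical Python, not from the chapter) reproduces the aggregate array throughput from the per-chip rate, reading MC/sec as millions of connections per second and GC/sec as billions; it assumes an ideal 16×16 array with no I/O or scheduling overhead.

```python
# Back-of-the-envelope throughput check for the MA16 array described above.
# Assumptions (taken from the abstract, idealized):
#   - one MA16 chip sustains ~500 million connections per second (MC/sec),
#     where one connection is a 16-bit operation,
#   - the systolic array is a full 16 x 16 grid of such chips,
#   - array throughput scales linearly with the number of chips.

CHIP_RATE_MC_PER_S = 500   # per-chip throughput in MC/sec (from the abstract)
ARRAY_DIM = 16             # chips along one edge of the 2-D array

chips = ARRAY_DIM * ARRAY_DIM               # 256 chips in the array
array_rate_mc = chips * CHIP_RATE_MC_PER_S  # 128,000 MC/sec
array_rate_gc = array_rate_mc / 1_000       # 128 GC/sec, matching the abstract

print(f"{chips} chips x {CHIP_RATE_MC_PER_S} MC/sec = {array_rate_gc:.0f} GC/sec")
```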




Copyright information

© 1991 Springer Science+Business Media Dordrecht

About this chapter

Cite this chapter

Ramacher, U. et al. (1991). Design of a 1st Generation Neurocomputer. In: Ramacher, U., Rückert, U. (eds) VLSI Design of Neural Networks. The Springer International Series in Engineering and Computer Science, vol 122. Springer, Boston, MA. https://doi.org/10.1007/978-1-4615-3994-0_14

Download citation

  • DOI: https://doi.org/10.1007/978-1-4615-3994-0_14

  • Publisher Name: Springer, Boston, MA

  • Print ISBN: 978-1-4613-6785-7

  • Online ISBN: 978-1-4615-3994-0

  • eBook Packages: Springer Book Archive
