DOI: 10.1145/3477145.3477151
research-article

Neuromorphic Design Using Reward-based STDP Learning on Event-Based Reconfigurable Cluster Architecture

Published: 13 October 2021

ABSTRACT

Neuromorphic computing systems simulate spiking neural networks, both for research into how biological neural networks function and for applied engineering such as robotics, pattern recognition, and machine learning. In this paper, we present a neuromorphic system built on an asynchronous, event-based hardware platform. We present three algorithms for implementing spiking networks on this platform and discuss the trade-offs between synchronisation and messaging costs. A reinforcement learning method, reward-modulated STDP, provides online learning in the network. We evaluate performance on a single box of our architecture using 6000 concurrent hardware threads and demonstrate scaling to networks with up to 2 million neurons and 400 million synapses. Compared to existing neuromorphic platforms, our architecture achieves a 20 times speed-up over the Brian simulator on an x86 machine and a 16 times speed-up over a 48-chip SpiNNaker node.
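The reward-modulated STDP rule mentioned above can be sketched as follows. This is a minimal illustration of the general technique, not the paper's implementation: each synapse accumulates classic STDP updates into an eligibility trace, and a global reward signal gates whether the trace is committed to the weight. All parameter values (`A_PLUS`, `A_MINUS`, `TAU_E`, `LR`) and the helper names are hypothetical choices for this sketch.

```python
import numpy as np

# Illustrative constants; the paper's actual values are not given in the abstract.
A_PLUS, A_MINUS = 0.01, 0.012   # STDP potentiation/depression amplitudes
TAU_STDP = 20.0                 # STDP time constant (ms)
TAU_E = 25.0                    # eligibility-trace decay constant (ms)
LR = 0.5                        # learning rate applied to reward * trace

def stdp_kernel(dt):
    """Pairwise STDP contribution for spike-time difference dt = t_post - t_pre (ms)."""
    if dt >= 0:
        return A_PLUS * np.exp(-dt / TAU_STDP)    # pre-before-post: potentiate
    return -A_MINUS * np.exp(dt / TAU_STDP)       # post-before-pre: depress

def step(weight, trace, dt_pairs, reward, dt_ms=1.0):
    """Advance one simulation step for a single synapse.

    dt_pairs: spike-time differences observed this step.
    reward:   global reward signal gating the weight update.
    """
    trace *= np.exp(-dt_ms / TAU_E)               # decay eligibility trace
    for dt in dt_pairs:
        trace += stdp_kernel(dt)                  # accrue STDP into the trace
    weight += LR * reward * trace                 # reward gates the commit
    return float(np.clip(weight, 0.0, 1.0)), trace

w, e = 0.5, 0.0
w, e = step(w, e, dt_pairs=[5.0], reward=1.0)     # rewarded causal pairing strengthens w
```

Because the update is local to the synapse apart from the scalar reward, a rule of this shape maps naturally onto event-based hardware: spike events update traces, and an occasional reward broadcast triggers the weight commits.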


Published in

ICONS 2021: International Conference on Neuromorphic Systems 2021
July 2021, 198 pages
ISBN: 9781450386913
DOI: 10.1145/3477145
Copyright © 2021 ACM


Publisher: Association for Computing Machinery, New York, NY, United States



Qualifiers: research article, refereed (limited)

Overall acceptance rate: 13 of 22 submissions, 59%
