Abstract
Neuromorphic devices capable of training spiking neural networks (SNNs) are difficult to develop for two main reasons: the lack of efficient supervised learning algorithms, and high computational requirements that ultimately lead to higher power consumption and cost. In this article, we present an FPGA-based neuromorphic system capable of training SNNs efficiently. The Tempotron learning rule, combined with population coding, is adopted for SNN learning to achieve high classification accuracy. To blend cost efficiency with high throughput, the integration of both integrate-and-fire (IF) and leaky integrate-and-fire (LIF) neurons is proposed. Moreover, the post-synaptic potential (PSP) kernel function of the LIF neuron is modeled using slopes; this novel solution obviates the need for multipliers and memory accesses for kernel computations. Experimental results show that a speedup of about 15\(\times\) can be obtained on a general-purpose von Neumann device when the proposed scheme is adopted. Furthermore, the proposed neuromorphic design is fully parallelized and achieves a maximum throughput of about 2460\(\times\)10\(^6\) 4-input samples per second, while consuming only 13.6 slice registers per synapse and 89.5 look-up tables (LUTs) per synapse on a Virtex-6 FPGA. The system can classify an input sample in about 4.88 ns.
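The slope-based PSP modeling mentioned above can be illustrated with a minimal software sketch: instead of evaluating the exponential PSP kernel at every time step (which needs multipliers or a kernel lookup table), each input spike activates a piecewise-linear trace whose value is advanced by a constant slope increment per step, i.e., by addition only. All segment lengths and slope values below are hypothetical, chosen only to demonstrate the add-only update; they are not the parameters or fixed-point formats used in the paper.

```python
# Sketch of a slope-based (piecewise-linear) PSP kernel for a LIF neuron.
# Assumption: the kernel is approximated by one rising and one falling
# linear segment, so each time step only adds a fixed constant to the
# membrane contribution -- no multiplication, no kernel memory lookups.

RISE_STEPS = 4            # hypothetical: steps spent on the rising edge
DECAY_STEPS = 12          # hypothetical: steps spent on the falling edge
RISE_SLOPE = 0.25         # hypothetical: amount added per rising step
DECAY_SLOPE = -1.0 / 12   # hypothetical: amount added per decaying step

def psp_trace(num_steps):
    """Kernel contribution of a single input spike (arriving at t = 0),
    accumulated purely by constant-slope additions."""
    v, trace = 0.0, []
    for t in range(num_steps):
        if t < RISE_STEPS:
            v += RISE_SLOPE                  # rising segment: add slope
        elif t < RISE_STEPS + DECAY_STEPS:
            v += DECAY_SLOPE                 # falling segment: subtract slope
        # after both segments the contribution has returned to ~0
        trace.append(v)
    return trace
```

In hardware, a per-synapse accumulator updated this way replaces both the multiplier and the kernel memory access, which is what makes the fully parallel, low-resource implementation possible.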
Data availability
All the datasets used for the evaluation of the proposed scheme are publicly available; the relevant references are included in the bibliography.
References
Liu, D., Yu, H., Chai, Y.: Low-power computing with neuromorphic engineering. Adv. Intell. Syst. 3(2), 2000150 (2021)
Merolla, P.A., et al.: A million spiking-neuron integrated circuit with a scalable communication network and interface. Science 345(6197), 668–673 (2014)
Lin, C.-K., et al.: Programming spiking neural networks on Intel’s Loihi. Computer 51(3), 52–61 (2018)
Pei, J., et al.: Towards artificial general intelligence with hybrid Tianjic chip architecture. Nature 572(7767), 106–111 (2019)
Painkras, E., et al.: SpiNNaker: a 1-W 18-core system-on-chip for massively-parallel neural network simulation. IEEE J. Solid-State Circuits 48(8), 1943–1953 (2013). https://doi.org/10.1109/JSSC.2013.2259038
Maass, W., Papadimitriou, C.H., Vempala, S., Legenstein, R.: Brain computation: a computer science perspective, pp. 184–199. Springer, Cham (2019)
McKenzie, A., Branch, D. W., Forsythe, C., James, C.D.: Toward exascale computing through neuromorphic approaches. In: Sandia report SAND2010-6312, Sandia National Laboratories (2010)
Kim, S., Park, S., Na, B., Yoon, S.: Spiking-YOLO: spiking neural network for energy-efficient object detection. In: Proceedings of the AAAI Conference on Artificial Intelligence, AAAI, pp. 11270–11277 (2020). https://doi.org/10.1609/aaai.v34i07.6787
Izhikevich, E.M.: Which model to use for cortical spiking neurons? IEEE Trans. Neural Netw. 15(5), 1063–1070 (2004)
Wu, Y., Deng, L., Li, G., Zhu, J., Shi, L.: Spatio-temporal backpropagation for training high-performance spiking neural networks. Front. Neurosci. 12, 331 (2018)
Shouval, H.Z., Wang, S.S.-H., Wittenberg, G.M.: Spike timing dependent plasticity: a consequence of more fundamental learning rules. Front. Comput. Neurosci. 4, 19 (2010)
Pfeiffer, M., Pfeil, T.: Deep learning with spiking neurons: opportunities and challenges. Front. Neurosci. 12, 774 (2018)
Heidarpour, M., Ahmadi, A., Rashidzadeh, R.: A CORDIC based digital hardware for adaptive exponential integrate and fire neuron. IEEE Trans. Circuits Syst. I: Regul. Pap. 63(11), 1986–1996 (2016)
University of California, Irvine: UCI Machine Learning Repository. https://archive.ics.uci.edu/ml/datasets.php. Accessed 5 Jan 2023
Deng, L.: The MNIST database of handwritten digit images for machine learning research [best of the web]. IEEE Signal Process. Mag. 29(6), 141–142 (2012)
Heidarpur, M., Ahmadi, A., Ahmadi, M., Azghadi, M.R.: CORDIC-SNN: on-FPGA STDP learning with Izhikevich neurons. IEEE Trans. Circuits Syst. I: Regul. Pap. 66(7), 2651–2661 (2019)
Farsa, E.Z., Ahmadi, A., Maleki, M.A., Gholami, M., Rad, H.N.: A low-cost high-speed neuromorphic hardware based on spiking neural network. IEEE Trans. Circuits Syst. II: Express Briefs 66(9), 1582–1586 (2019)
Li, S., et al.: A fast and energy-efficient SNN processor with adaptive clock/event-driven computation scheme and online learning. IEEE Trans. Circuits Syst. I: Regul. Pap. 68(4), 1543–1552 (2021). https://doi.org/10.1109/TCSI.2021.3052885
Zhang, G., et al.: A low-cost and high-speed hardware implementation of spiking neural network. Neurocomputing 382, 106–115 (2020)
Chen, Y.-H., Krishna, T., Emer, J.S., Sze, V.: Eyeriss: an energy-efficient reconfigurable accelerator for deep convolutional neural networks. IEEE J. Solid-State Circuits 52(1), 127–138 (2016)
Cao, Y., Chen, Y., Khosla, D.: Spiking deep convolutional neural networks for energy-efficient object recognition. Int. J. Comput. Vision 113(1), 54–66 (2015)
Qiao, G., et al.: STBNN: hardware-friendly spatio-temporal binary neural network with high pattern recognition accuracy. Neurocomputing 409, 351–360 (2020)
Lammie, C., Hamilton, T., Azghadi, M.R.: Unsupervised character recognition with a simplified FPGA neuromorphic system, pp. 1–5. IEEE (2018)
Ma, D., et al.: Darwin: a neuromorphic hardware co-processor based on spiking neural networks. J. Syst. Architect. 77, 43–51 (2017)
Neil, D., Liu, S.-C.: Minitaur, an event-driven FPGA-based spiking network accelerator. IEEE Trans. Very Large Scale Integr. (VLSI) Syst. 22(12), 2621–2628 (2014)
Wu, J., et al.: Efficient design of spiking neural network with STDP learning based on fast CORDIC. IEEE Trans. Circuits Syst. I: Regul. Pap. 68(6), 2522–2534 (2021)
Diehl, P.U., et al.: Fast-classifying, high-accuracy spiking deep networks through weight and threshold balancing. In: 2015 International Joint Conference on Neural Networks, IJCNN, pp. 1–8 (2015). https://doi.org/10.1109/IJCNN.2015.7280696
Rueckauer, B., Lungu, I.-A., Hu, Y., Pfeiffer, M.: Theory and tools for the conversion of analog to spiking convolutional neural networks. (2016). arXiv preprint arXiv:1612.04052
Rueckauer, B., Lungu, I.-A., Hu, Y., Pfeiffer, M., Liu, S.-C.: Conversion of continuous-valued deep networks to efficient event-driven networks for image classification. Front. Neurosci. 11, 682 (2017)
Hubara, I., Courbariaux, M., Soudry, D., El-Yaniv, R., Bengio, Y.: Binarized neural networks. In: Proceedings of the 30th International Conference on Neural Information Processing Systems, NIPS, pp. 4114–4122 (2016). http://papers.nips.cc/paper/6573-binarized-neural-networks
Bohte, S.M., Kok, J.N., La Poutre, H.: Error-backpropagation in temporally encoded networks of spiking neurons. Neurocomputing 48(1–4), 17–37 (2002)
Sarić, R., Jokić, D., Beganović, N., Pokvić, L.G., Badnjević, A.: FPGA-based real-time epileptic seizure classification using artificial neural network. Biomed. Signal Process. Control 62, 102106 (2020)
Ortega-Zamorano, F., Jerez, J.M., Urda Muñoz, D., Luque-Baena, R.M., Franco, L.: Efficient implementation of the backpropagation algorithm in FPGAs and microcontrollers. IEEE Trans. Neural Netw. Learn. Syst. 27(9), 1840–1850 (2016). https://doi.org/10.1109/TNNLS.2015.2460991
Fang, H., Shrestha, A., Zhao, Z., Li, Y., Qiu, Q.: An event-driven neuromorphic system with biologically plausible temporal dynamics. In: 2019 IEEE/ACM International Conference on Computer-Aided Design, ICCAD, pp. 1–8 (2019). https://doi.org/10.1109/ICCAD45719.2019.8942083
Zhao, B., Ding, R., Chen, S., Linares-Barranco, B., Tang, H.: Feedforward categorization on AER motion events using cortex-like features in a spiking neural network. IEEE Trans. Neural Netw. Learn. Syst. 26(9), 1963–1978 (2015). https://doi.org/10.1109/TNNLS.2014.2362542
Jeong, D.S.: Tutorial: neuromorphic spiking neural networks for temporal learning. J. Appl. Phys. 124(15), 152002 (2018)
Chowdhury, S.S., Lee, C., Roy, K.: Towards understanding the effect of leak in spiking neural networks (2020). arXiv preprint arXiv:2006.08761
Afshar, S., et al.: Turn down that noise: synaptic encoding of afferent snr in a single spiking neuron. IEEE Trans. Biomed. Circuits Syst. 9(2), 188–196 (2015)
Pouget, A., Dayan, P., Zemel, R.: Information processing with population codes. Nat. Rev. Neurosci. 1(2), 125–132 (2000)
Shrestha, S.B.: Supervised learning in multilayer spiking neural network. Ph.D. thesis, Nanyang Technological University (2017)
Funding
The authors received no specific grant from any individual or organization.
Author information
Contributions
AS conceived the idea, conducted experiments, and wrote the manuscript. MIV and SHP analyzed the experiments and the feasibility of the work, and reviewed the manuscript.
Ethics declarations
Conflict of interest
The authors declare no competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Siddique, A., Vai, M.I. & Pun, S.H. A low-cost, high-throughput neuromorphic computer for online SNN learning. Cluster Comput (2023). https://doi.org/10.1007/s10586-023-04093-9