10.1145/3546790.3546801 · ICONS Conference Proceedings · Research Article

Dictionary Learning with Accumulator Neurons

Published: 07 September 2022

ABSTRACT

The Locally Competitive Algorithm (LCA) uses local competition between non-spiking leaky integrator neurons to infer sparse representations, allowing for potentially real-time execution on massively parallel neuromorphic architectures such as Intel’s Loihi processor. Here, we focus on the problem of inferring sparse representations from streaming video using dictionaries of spatiotemporal features optimized in an unsupervised manner for sparse reconstruction. Non-spiking LCA has previously been used to achieve unsupervised learning of spatiotemporal dictionaries composed of convolutional kernels from raw, unlabeled video. We demonstrate how unsupervised dictionary learning with spiking LCA (S-LCA) can be efficiently implemented using accumulator neurons, which combine a conventional leaky-integrate-and-fire (LIF) spike generator with an additional state variable that is used to minimize the difference between the integrated input and the spiking output. We demonstrate dictionary learning across a wide range of dynamical regimes, from graded to intermittent spiking, for inferring sparse representations of both static images drawn from the CIFAR database and video frames captured from a DVS camera. On a classification task that requires identifying the suit of each card in a deck being rapidly flipped through as viewed by a DVS camera, we find essentially no degradation in performance as the LCA model used to infer sparse spatiotemporal representations migrates from graded to spiking. We conclude that accumulator neurons are likely to provide a powerful enabling component of future neuromorphic hardware for implementing online unsupervised learning of spatiotemporal dictionaries optimized for sparse reconstruction of streaming video from event-based DVS cameras.
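The accumulator mechanism summarized in the abstract can be sketched roughly as follows. This is a minimal sigma-delta-style illustration, not the paper's implementation: the function name, parameters, and update rule are assumptions based only on the description above, and it omits the LIF spike generator and the lateral competition that S-LCA adds on top.

```python
def accumulator_step(acc, drive, threshold=1.0, dt=1.0):
    """One time step of a hypothetical accumulator neuron.

    acc:   state variable tracking the difference between the
           integrated input and the activity already transmitted
           as spikes
    drive: instantaneous (possibly graded) input to the neuron
    """
    # Integrate the input into the accumulator.
    acc += dt * drive
    spikes = 0
    # Emit spikes until the accumulated deficit falls below
    # threshold; each spike "transmits" one threshold's worth of
    # activity, keeping the spike train close to the integrated
    # input over time.
    while acc >= threshold:
        spikes += 1
        acc -= threshold
    return acc, spikes
```

Under this scheme the long-run spike count approximates the integrated input divided by the threshold, which is one way a spiking neuron can interpolate between graded and intermittent-spiking regimes as the threshold or input rate varies.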

