ABSTRACT
The Locally Competitive Algorithm (LCA) uses local competition between non-spiking leaky-integrator neurons to infer sparse representations, allowing for potentially real-time execution on massively parallel neuromorphic architectures such as Intel's Loihi processor. Here, we focus on inferring sparse representations of streaming video using dictionaries of spatiotemporal features optimized in an unsupervised manner for sparse reconstruction. Non-spiking LCA has previously been used to learn, without supervision, spatiotemporal dictionaries of convolutional kernels from raw, unlabeled video. We demonstrate how unsupervised dictionary learning with spiking LCA (S-LCA) can be efficiently implemented using accumulator neurons, which combine a conventional leaky integrate-and-fire (LIF) spike generator with an additional state variable that minimizes the difference between the integrated input and the spiking output. We demonstrate dictionary learning across a wide range of dynamical regimes, from graded to intermittent spiking, for inferring sparse representations of both static images drawn from the CIFAR database and video frames captured with a DVS camera. On a classification task requiring identification of the suit of playing cards rapidly flipped in front of a DVS camera, we find essentially no degradation in performance as the LCA model used to infer sparse spatiotemporal representations migrates from graded to spiking dynamics. We conclude that accumulator neurons are likely to be a powerful enabling component of future neuromorphic hardware for online unsupervised learning of spatiotemporal dictionaries optimized for sparse reconstruction of streaming video from event-based DVS cameras.
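To make the two ingredients of the abstract concrete, the following is a minimal NumPy sketch (not the paper's implementation) of (1) non-spiking LCA inference with a soft-threshold activation, following Rozell et al. (2008), and (2) an accumulator spike generator in which an extra state variable tracks the deficit between the integrated graded input and the emitted spike train, firing whenever that deficit crosses a threshold. All names (`lca`, `soft_threshold`, `accumulate_spikes`) and parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def soft_threshold(u, lam):
    # Soft-threshold activation used in LCA (Rozell et al. 2008).
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def lca(x, D, lam=0.05, tau=10.0, n_steps=200):
    """Non-spiking LCA: infer a sparse code a such that x ~ D @ a.
    Columns of D are unit-norm dictionary elements."""
    b = D.T @ x                       # feed-forward drive
    G = D.T @ D - np.eye(D.shape[1])  # lateral competition, no self-inhibition
    u = np.zeros(D.shape[1])          # leaky-integrator membrane potentials
    for _ in range(n_steps):
        a = soft_threshold(u, lam)
        u += (b - u - G @ a) / tau    # leaky integration with inhibition
    return soft_threshold(u, lam)

def accumulate_spikes(a_target, n_steps=100, theta=1.0):
    """Accumulator spike generator: an auxiliary state variable tracks the
    difference between the integrated graded input and the spiking output,
    emitting a spike whenever the deficit reaches theta."""
    err = np.zeros_like(a_target)
    spikes = np.zeros((n_steps,) + a_target.shape)
    for t in range(n_steps):
        err += a_target               # integrate the graded activation
        fire = err >= theta           # spike where the deficit crosses theta
        spikes[t] = fire
        err -= theta * fire           # subtract the emitted output
    return spikes

# Toy usage: a random unit-norm dictionary and a 2-sparse signal.
rng = np.random.default_rng(0)
D = rng.normal(size=(16, 32))
D /= np.linalg.norm(D, axis=0)
a_true = np.zeros(32); a_true[[3, 17]] = [1.0, -0.5]
x = D @ a_true
a = lca(x, D)                         # graded sparse code
spikes = accumulate_spikes(np.abs(a) * 0.1)
rates = spikes.mean(axis=0) / 0.1     # spike rates track graded activations
```

The accumulator loop is a sigma-delta-style encoder: over many steps the spike count approximates the integrated graded input to within one threshold, which is why, as the abstract reports, inference quality can be preserved as the model migrates from graded to spiking dynamics.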
Index Terms
- Dictionary Learning with Accumulator Neurons