
SEC-Learn: Sensor Edge Cloud for Federated Learning

Invited Paper

  • Conference paper

Part of the book series: Lecture Notes in Computer Science ((LNCS,volume 13227))

Abstract

Due to the slowdown of Moore’s Law and the end of Dennard scaling, disruptive new computer architectures are required. One such approach is Neuromorphic Computing, which is inspired by the functionality of the human brain. In this position paper, we present the projected SEC-Learn ecosystem, which combines neuromorphic embedded architectures with Federated Learning in the cloud, uniting high performance with data protection and energy efficiency.
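The cloud side of the projected ecosystem relies on federated learning, whose canonical aggregation step is Federated Averaging (FedAvg, McMahan et al.): each edge device trains on its private data and only model weights, never raw sensor data, are sent to the cloud, where they are averaged weighted by dataset size. The following is a minimal illustrative sketch; the one-parameter linear model, learning rate, and client data are hypothetical and chosen only to make the mechanism concrete, not taken from the paper.

```python
def local_update(w, data, lr=0.1):
    """One pass of gradient descent on a client for the model y = w*x."""
    for x, y in data:
        grad = 2 * (w * x - y) * x  # derivative of (w*x - y)^2 w.r.t. w
        w -= lr * grad
    return w

def fed_avg(client_weights, client_sizes):
    """Cloud-side aggregation: dataset-size-weighted average of client models."""
    total = sum(client_sizes)
    return sum(w * n for w, n in zip(client_weights, client_sizes)) / total

# Example: three edge devices with private data following y = 2x.
clients = [
    [(1.0, 2.0), (2.0, 4.0)],
    [(3.0, 6.0)],
    [(0.5, 1.0), (1.5, 3.0), (2.5, 5.0)],
]
global_w = 0.0
for _ in range(50):  # communication rounds
    updates = [local_update(global_w, data) for data in clients]
    global_w = fed_avg(updates, [len(d) for d in clients])
print(round(global_w, 2))  # converges toward 2.0
```

Only the scalars `updates` cross the device/cloud boundary in each round; the tuples in `clients` stay on their devices, which is the data-protection property the abstract refers to.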

Author names are sorted alphabetically.




Author information

Correspondence to Matthias Jung.


Copyright information

© 2022 Springer Nature Switzerland AG

About this paper


Cite this paper

Aichroth, P. et al. (2022). SEC-Learn: Sensor Edge Cloud for Federated Learning. In: Orailoglu, A., Jung, M., Reichenbach, M. (eds) Embedded Computer Systems: Architectures, Modeling, and Simulation. SAMOS 2021. Lecture Notes in Computer Science, vol 13227. Springer, Cham. https://doi.org/10.1007/978-3-031-04580-6_29


  • DOI: https://doi.org/10.1007/978-3-031-04580-6_29

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-04579-0

  • Online ISBN: 978-3-031-04580-6

  • eBook Packages: Computer Science (R0)
