DOI: 10.1145/3528416.3530870

Poster

Hardware acceleration of complex machine learning models through modern high-level synthesis

Published: 17 May 2022

ABSTRACT

Machine learning (ML) and deep learning algorithms are well suited to processing and analyzing large amounts of data, as has been repeatedly demonstrated in applications such as image classification, natural language processing, and recommendation systems. Both ML training and inference are compute- and memory-intensive, leading to the widespread adoption of heterogeneous systems containing specialized accelerators. While graphics processing units (GPUs) are the established platform of choice for accelerating training, they are often too power-hungry to run inference tasks, or cannot meet the strict latency requirements of scientific experiments. A variety of custom solutions implemented on field-programmable gate arrays (FPGAs) or as application-specific integrated circuits (ASICs) have been proposed in their place, ranging from generic "neural processors" to accelerators that target a narrow set of models with great efficiency.


        • Published in

          CF '22: Proceedings of the 19th ACM International Conference on Computing Frontiers
          May 2022
          321 pages
          ISBN: 9781450393386
          DOI: 10.1145/3528416

          Copyright © 2022 Owner/Author

          Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

          Publisher

          Association for Computing Machinery

          New York, NY, United States

          Publication History

          • Published: 17 May 2022


          Qualifiers

          • poster

          Acceptance Rates

          Overall Acceptance Rate: 240 of 680 submissions, 35%