DOI: 10.1145/3316781.3323472 · research-article · DAC Conference Proceedings

Building Robust Machine Learning Systems: Current Progress, Research Challenges, and Opportunities

Published: 02 June 2019

ABSTRACT

Machine learning, and deep learning in particular, is being used in almost every aspect of life to assist humans, especially in mobile and Internet of Things (IoT)-based applications. Owing to its state-of-the-art performance, deep learning is also being deployed in safety-critical applications, for instance, autonomous vehicles. Reliability and security are two of the key characteristics required of these applications because of the impact they can have on human lives. To this end, this paper highlights the current progress, challenges, and research opportunities in the domain of robust systems for machine learning-based applications.
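One class of security threats the paper surveys is adversarial examples: small, crafted input perturbations that flip a model's prediction. As a hedged illustration (not the paper's method), the sketch below applies a fast-gradient-sign-style perturbation to a toy logistic regression model; all weights, inputs, and the epsilon value are made up for the demo.

```python
import numpy as np


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def fgsm_perturb(x, w, b, y_true, eps):
    """Nudge x by eps in the direction that increases the logistic loss.

    For the logistic loss, the gradient of the loss with respect to the
    input x is (p - y_true) * w, where p is the predicted probability.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y_true) * w
    return x + eps * np.sign(grad_x)


rng = np.random.default_rng(0)
w = rng.normal(size=8)   # toy model weights (illustrative only)
b = 0.1
x = rng.normal(size=8)   # a "clean" input
y = 1.0                  # its true label

p_clean = sigmoid(w @ x + b)
x_adv = fgsm_perturb(x, w, b, y, eps=0.5)
p_adv = sigmoid(w @ x_adv + b)

# Each feature is pushed against the sign of the corresponding weight,
# so the probability assigned to the true class drops.
assert p_adv < p_clean
```

The same one-step idea underlies many of the attacks and defenses the paper discusses; stronger attacks simply iterate this gradient step or search the input space more carefully.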


  • Published in

    DAC '19: Proceedings of the 56th Annual Design Automation Conference 2019
    June 2019
    1378 pages
    ISBN:9781450367257
    DOI:10.1145/3316781

    Copyright © 2019 ACM


    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Qualifiers

    • research-article
    • Research
    • Refereed limited

    Acceptance Rates

Overall Acceptance Rate: 1,770 of 5,499 submissions, 32%

