DOI: 10.1145/3372297.3423338
research-article

DeepDyve: Dynamic Verification for Deep Neural Networks

Published: 02 November 2020

ABSTRACT

Deep neural networks (DNNs) have become one of the enabling technologies in many safety-critical applications, e.g., autonomous driving and medical image analysis. DNN systems, however, suffer from various kinds of threats, such as adversarial example attacks and fault injection attacks. While many defense methods have been proposed against maliciously crafted inputs, solutions against faults present in the DNN system itself (e.g., in its parameters and calculations) are far less explored. In this paper, we develop a novel lightweight fault-tolerant solution for DNN-based systems, namely DeepDyve, which employs pre-trained neural networks that are far simpler and smaller than the original DNN for dynamic verification. The key to enabling such lightweight checking is that the smaller neural network only needs to produce approximate results for the original task without sacrificing much fault coverage. We develop efficient and effective architecture and task exploration techniques to achieve an optimized risk/overhead trade-off in DeepDyve. Experimental results show that DeepDyve can reduce 90% of the risks at around 10% overhead.
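The dynamic-verification idea described in the abstract can be illustrated with a minimal, hypothetical Python sketch (the model functions below are stand-ins, not the paper's actual networks or decision rule): a small checker network re-predicts each input, and a disagreement with the large task network flags a potential fault and triggers a trusted re-computation.

```python
# Minimal sketch of DeepDyve-style dynamic verification.
# task_model and checker_model are hypothetical stand-ins returning class scores.

def task_model(x):
    # Large, accurate DNN deployed for the original task (stand-in).
    return [0.1, 0.7, 0.2]

def checker_model(x):
    # Much smaller checker DNN; only needs approximate agreement (stand-in).
    return [0.2, 0.6, 0.2]

def argmax(scores):
    # Index of the highest score, i.e., the predicted class.
    return max(range(len(scores)), key=scores.__getitem__)

def verified_predict(x, recompute):
    """Return the task model's prediction; if the checker disagrees,
    fall back to a trusted re-computation (e.g., re-running the task
    model on fault-free hardware)."""
    y = argmax(task_model(x))
    if argmax(checker_model(x)) != y:
        y = argmax(recompute(x))  # disagreement: treat as a suspected fault
    return y

print(verified_predict(None, task_model))  # → 1 (the two models agree here)
```

Because the checker only has to match the task model's top prediction on most inputs, it can be far smaller than the original network, which is where the low runtime overhead comes from.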

Supplemental Material

Copy of CCS2020_fpe019_YuLi - Pat Weeden.mov (mov, 195.2 MB)


Published in

CCS '20: Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security
October 2020, 2180 pages
ISBN: 9781450370899
DOI: 10.1145/3372297
Copyright © 2020 ACM


Publisher

Association for Computing Machinery, New York, NY, United States


Acceptance Rates

Overall Acceptance Rate: 1,261 of 6,999 submissions (18%)
