ABSTRACT
Machine learning, and deep learning in particular, is being used in almost every aspect of daily life, most notably in mobile and Internet of Things (IoT)-based applications. Owing to its state-of-the-art performance, deep learning is also being deployed in safety-critical applications, for instance, autonomous vehicles. Reliability and security are two of the key characteristics required of these applications because of the impact their failures can have on human lives. Towards this, in this paper, we highlight the current progress, challenges, and research opportunities in the domain of robust systems for machine learning-based applications.
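One security threat covered by work in this area, the adversarial example, can be illustrated on a toy model. The sketch below applies a one-step fast-gradient-sign (FGSM-style) perturbation to a hypothetical logistic-regression classifier; the weights, input, and perturbation budget are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of an FGSM-style adversarial perturbation on a toy
# logistic-regression classifier (all parameters are hypothetical).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """One-step fast-gradient-sign perturbation.

    For logistic regression with cross-entropy loss, the gradient of the
    loss w.r.t. the input x is (sigmoid(w.x + b) - y) * w; FGSM shifts x
    by eps in the sign direction of that gradient to increase the loss.
    """
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# A clean input correctly classified as class 1 ...
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])          # w.x + b = 1.5, so p ≈ 0.82
x_adv = fgsm_perturb(x, w, b, y=1.0, eps=0.8)

# ... is flipped to class 0 by a small, norm-bounded perturbation.
p_clean = sigmoid(np.dot(w, x) + b)
p_adv = sigmoid(np.dot(w, x_adv) + b)
```

On deep networks the same one-step attack is computed by backpropagating the loss to the input pixels; the point of the sketch is only that a bounded input change can cross the decision boundary.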