Abstract
In security-sensitive applications, the success of machine learning systems depends on a thorough vetting of their resistance to adversarial data. In one pertinent, well-motivated attack scenario, an adversary may attempt to evade a deployed system at test time by carefully manipulating attack samples. In this work, we present a simple but effective gradient-based approach that can be exploited to systematically assess the security of several widely used classification algorithms against evasion attacks. Following a recently proposed framework for security evaluation, we simulate attack scenarios that exhibit different risk levels for the classifier by increasing the attacker's knowledge of the system and her ability to manipulate attack samples. This gives the classifier designer a clearer picture of classifier performance under evasion attacks, and allows for more informed model selection (or parameter setting). We evaluate our approach on the relevant security task of malware detection in PDF files, and show that such systems can be easily evaded. We also sketch some countermeasures suggested by our analysis.
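The gradient-based evasion idea can be illustrated in a few lines of code. Below is a minimal sketch, assuming a differentiable linear discriminant g(x) = w·x + b trained on synthetic two-dimensional data, with an L2 budget d_max standing in for the attacker's ability to manipulate samples. All names, data, and parameters here (g, x0, d_max, step, the logistic-regression surrogate) are illustrative assumptions, not the paper's experimental setup, which targets nonlinear classifiers on real PDF-malware features.

import numpy as np

rng = np.random.default_rng(0)

# Toy data: class +1 ("malicious") and class -1 ("benign").
X = np.vstack([rng.normal(+2.0, 1.0, size=(100, 2)),
               rng.normal(-2.0, 1.0, size=(100, 2))])
y = np.hstack([np.ones(100), -np.ones(100)])

# Fit a logistic-regression discriminant g(x) = w.x + b by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    margins = y * (X @ w + b)
    coef = -y / (1.0 + np.exp(margins))   # d(log-loss)/dg, per sample
    w -= 0.1 * (coef @ X) / len(y)
    b -= 0.1 * coef.mean()

def g(x):
    # Discriminant value: positive means "malicious".
    return x @ w + b

# Evasion: starting from a malicious seed x0, descend the gradient of g
# while keeping the manipulated sample within an L2 budget d_max of x0.
x0 = np.array([2.5, 2.0])
d_max, step = 6.0, 0.1
x = x0.copy()
for _ in range(200):
    if g(x) < 0:           # classified as benign: evasion succeeded
        break
    x -= step * w          # for a linear g, the gradient w.r.t. x is just w
    delta = x - x0         # project back onto the feasible ball ||x - x0|| <= d_max
    norm = np.linalg.norm(delta)
    if norm > d_max:
        x = x0 + delta * (d_max / norm)

print(f"g(x0) = {g(x0):+.3f}  ->  g(x_adv) = {g(x):+.3f}")

Tightening d_max models a weaker attacker; the same descent loop applies to any differentiable discriminant (e.g., a kernel SVM) by substituting the appropriate gradient of g.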
Copyright information
© 2013 Springer-Verlag Berlin Heidelberg
Cite this paper
Biggio, B. et al. (2013). Evasion Attacks against Machine Learning at Test Time. In: Blockeel, H., Kersting, K., Nijssen, S., Železný, F. (eds) Machine Learning and Knowledge Discovery in Databases. ECML PKDD 2013. Lecture Notes in Computer Science, vol 8190. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-40994-3_25
DOI: https://doi.org/10.1007/978-3-642-40994-3_25
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-40993-6
Online ISBN: 978-3-642-40994-3