ABSTRACT
Machine learning (ML) models have become increasingly important components of many software systems. Therefore, ensuring their privacy and security is a crucial task. Current research mainly focuses on the development of security and privacy methods. However, ML practitioners, as the individuals in charge of translating the theory into practical applications, have not yet received much attention. In this paper, the security and privacy awareness and practices of ML practitioners are studied through an online survey with the aim of (1) gaining insight into the current state of awareness, (2) identifying influencing factors, and (3) exploring the actual use of existing methods and tools. The results indicate a relatively low general privacy and security awareness among the ML practitioners surveyed. In addition, they are less familiar with ML privacy protection methods than with general security methods or ML-specific security methods. Moreover, awareness correlates with the years of working with ML, but not with the level of academic education or the field of occupation. Finally, the practitioners in this study seem to experience uncertainties in incorporating legal frameworks, such as the European General Data Protection Regulation (GDPR), into their ML workflows.