Research article
DOI: 10.1145/3473856.3473869

“I Never Thought About Securing My Machine Learning Systems”: A Study of Security and Privacy Awareness of Machine Learning Practitioners

Published: 13 September 2021

ABSTRACT

Machine learning (ML) models have become increasingly important components of many software systems, which makes ensuring their privacy and security a crucial task. Current research focuses mainly on developing security and privacy methods; however, ML practitioners, as the individuals in charge of translating this theory into practical applications, have not yet received much attention. In this paper, the security and privacy awareness and practices of ML practitioners are studied through an online survey with the aim of (1) gaining insight into the current state of awareness, (2) identifying influencing factors, and (3) exploring the actual use of existing methods and tools. The results indicate a relatively low general privacy and security awareness among the ML practitioners surveyed. In addition, the respondents are less familiar with ML privacy protection methods than with general security methods or ML security methods. Moreover, awareness correlates with the number of years of working with ML, but not with the level of academic education or the field of occupation. Finally, the practitioners in this study seem to experience uncertainty in implementing legal frameworks, such as the European General Data Protection Regulation (GDPR), in their ML workflows.
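The abstract contrasts general security methods with ML privacy protection methods. Purely as an illustration (not taken from the paper), the sketch below shows one widely used ML privacy protection technique, the Laplace mechanism for differential privacy; the function name and parameters are hypothetical:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value with epsilon-differential privacy by adding
    Laplace noise whose scale is sensitivity / epsilon."""
    rng = rng if rng is not None else np.random.default_rng()
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_value + noise

# A counting query has sensitivity 1: adding or removing a single
# person's record changes the count by at most 1.
noisy_count = laplace_mechanism(true_value=42, sensitivity=1.0, epsilon=0.5)
```

Smaller values of epsilon give stronger privacy at the cost of noisier answers; libraries such as TensorFlow Privacy apply the same idea to gradient updates during model training.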


Published in

MuC '21: Proceedings of Mensch und Computer 2021
September 2021, 613 pages
ISBN: 9781450386456
DOI: 10.1145/3473856
Copyright © 2021 ACM

Publisher

Association for Computing Machinery, New York, NY, United States

