Abstract
Attacks that aim to identify the training data of neural networks represent a severe threat to the privacy of individuals in the training dataset. A possible protection is offered by anonymizing the training data or the training function with differential privacy. However, data scientists can choose between local and central differential privacy, and need to select meaningful privacy parameters \(\epsilon \). Comparing local and central differential privacy based on their privacy parameters alone can furthermore lead data scientists to incorrect conclusions, since the parameters reflect different types of mechanisms.
Instead, we empirically compare the relative privacy-accuracy trade-off of one central and two local differential privacy mechanisms under a white-box membership inference attack. While membership inference reflects only a lower bound on inference risk and differential privacy formulates an upper bound, our experiments with several datasets show that the privacy-accuracy trade-off is similar for both types of mechanisms despite the large difference in their upper bounds. This suggests that the upper bound is far from the practical susceptibility to membership inference. Thus, a small \(\epsilon \) in central differential privacy and a large \(\epsilon \) in local differential privacy result in similar membership inference risks, and local differential privacy can be a meaningful alternative to central differential privacy for differentially private deep learning despite its comparatively higher privacy parameters.
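The non-comparability of \(\epsilon \) across the two models can be illustrated with a toy mean-estimation task (a sketch with the Laplace mechanism and synthetic data, not part of the paper's experiments): under central differential privacy a trusted aggregator perturbs one aggregate with sensitivity \(1/n\), whereas under local differential privacy every participant perturbs their own record with sensitivity 1, so the same \(\epsilon \) yields very different accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000
data = rng.uniform(0.0, 1.0, n)   # one value per participant, range [0, 1]
true_mean = data.mean()

def central_dp_mean(values, eps):
    """Trusted aggregator: a single Laplace draw on the mean (sensitivity 1/n)."""
    return values.mean() + rng.laplace(scale=1.0 / (len(values) * eps))

def local_dp_mean(values, eps):
    """No trusted party: each participant perturbs their own value (sensitivity 1)."""
    return (values + rng.laplace(scale=1.0 / eps, size=len(values))).mean()

def rmse(estimator, eps, trials=200):
    """Root-mean-square error of the private mean estimate over repeated runs."""
    errs = [estimator(data, eps) - true_mean for _ in range(trials)]
    return float(np.sqrt(np.mean(np.square(errs))))

eps = 0.5
print(f"central eps={eps}: RMSE {rmse(central_dp_mean, eps):.4f}")
print(f"local   eps={eps}: RMSE {rmse(local_dp_mean, eps):.4f}")
```

At equal \(\epsilon \), the central estimate's error is roughly a factor \(1/\sqrt{n}\) smaller than the local estimate's, which is why matching accuracy under local differential privacy requires a much larger \(\epsilon \) and why the parameters of the two models should not be compared directly.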
J. Robl and P. W. Grassal—Contributed equally to this research.
P. W. Grassal and S. Schneider—This work was done during an internship at SAP.
Notes
1. We used Tensorflow Privacy: https://github.com/tensorflow/privacy.
2. We provide this dataset along with all evaluation code on GitHub: https://github.com/SAP-samples/security-research-membership-inference-and-differential-privacy.
References
Abadi, M., et al.: Deep learning with differential privacy. In: Proceedings of Conference on Computer and Communications Security (CCS). ACM Press (2016)
Abowd, J.M., Schmutte, I.M.: An economic analysis of privacy protection and statistical accuracy as social choices. Am. Econ. Rev. 109(1), 171–202 (2019)
Backes, M., Berrang, P., Humbert, M., Manoharan, P.: Membership privacy in microRNA-based studies. In: Proceedings of Conference on Computer and Communications Security (CCS). ACM Press (2016)
Bassily, R., Smith, A., Thakurta, A.: Private empirical risk minimization. In: Proceedings of Symposium on Foundations of Computer Science (FOCS). IEEE Computer Society (2014)
BBC News: Google DeepMind NHS app test broke UK privacy law (2017). https://www.bbc.com/news/technology-40483202
Carlini, N., Liu, C., Kos, J., Erlingsson, Ú., Song, D.: The secret sharer: measuring unintended neural network memorization and extracting secrets (2018)
Davis, J., Goadrich, M.: The relationship between precision-recall and ROC curves. In: Proceedings of Conference on Machine Learning (ICML). Omnipress (2006)
Dwork, C.: Differential privacy. In: Bugliesi, M., Preneel, B., Sassone, V., Wegener, I. (eds.) ICALP 2006. LNCS, vol. 4052, pp. 1–12. Springer, Heidelberg (2006). https://doi.org/10.1007/11787006_1
Dwork, C., Kenthapadi, K., McSherry, F., Mironov, I., Naor, M.: Our data, ourselves: privacy via distributed noise generation. In: Vaudenay, S. (ed.) EUROCRYPT 2006. LNCS, vol. 4004, pp. 486–503. Springer, Heidelberg (2006). https://doi.org/10.1007/11761679_29
Dwork, C., Roth, A.: The algorithmic foundations of differential privacy. Found. Trends Theoret. Comput. Sci. 9(3–4), 211–407 (2014)
Erlingsson, U., Feldman, V., Mironov, I., Raghunathan, A., Talwar, K., Thakurta, A.: Amplification by shuffling: from local to central differential privacy via anonymity. In: Proceedings of Symposium on Discrete Algorithms (SODA) (2019)
Erlingsson, U., Pihur, V., Korolova, A.: RAPPOR: randomized aggregatable privacy-preserving ordinal response. In: Proceedings of Conference on Computer and Communications Security (CCS). ACM Press (2014)
Everingham, M., Van Gool, L., Williams, C.K., Winn, J., Zisserman, A.: The PASCAL visual object classes (VOC) challenge. Int. J. Comput. Vis. 88(2), 303–338 (2010)
Fan, L.: Image pixelization with differential privacy. In: Kerschbaum, F., Paraboschi, S. (eds.) DBSec 2018. LNCS, vol. 10980, pp. 148–162. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-95729-6_10
Fredrikson, M., Jha, S., Ristenpart, T.: Model inversion attacks that exploit confidence information and basic countermeasures. In: Proceedings of Conference on Computer and Communications Security (CCS). ACM Press (2015)
Fredrikson, M., Lantz, E., Jha, S., Lin, S., Page, D., Ristenpart, T.: Privacy in pharmacogenetics: an end-to-end case study of personalized warfarin dosing. In: Proceedings of USENIX Security Symposium. USENIX Association (2014)
Goodfellow, I., Bengio, Y., Courville, A.: Deep Learning. MIT Press (2016). http://www.deeplearningbook.org
Grandvalet, Y., Canu, S.: Comments on “noise injection into inputs in back propagation learning’’. IEEE Trans. Syst. Man Cybernet. 25(4), 678–681 (1995)
Hay, M., Machanavajjhala, A., Miklau, G., Chen, Y., Zhang, D.: Principled evaluation of differentially private algorithms using DPBench. In: Proceedings of Conference on Management of Data (SIGMOD). ACM Press (2016)
Hayes, J., Melis, L., Danezis, G., De Cristofaro, E.: LOGAN: membership inference attacks against generative models. Proc. Priv. Enhanc. Technol. (PoPETs) 2019(1), 133–152 (2019)
Kashmir Hill: How Target Figured Out A Teen Girl Was Pregnant Before Her Father Did (2012). https://www.forbes.com/sites/kashmirhill/2012/02/16/how-target-figured-out-a-teen-girl-was-pregnant-before-her-father-did/
Huang, G.B., Ramesh, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: a database for studying face recognition in unconstrained environments. University of Massachusetts, Technical report (2007)
Iyengar, R., Near, J.P., Song, D., Thakkar, O.D., Thakurta, A., Wang, L.: Towards practical differentially private convex optimization. In: Proceedings of Symposium on Security and Privacy (S&P). IEEE Computer Society (2019)
Jayaraman, B., Evans, D.: Evaluating differentially private machine learning in practice. In: Proceedings of the USENIX Security Symposium. USENIX Association (2019)
Kairouz, P., Oh, S., Viswanath, P.: The composition theorem for differential privacy. IEEE Trans. Inf. Theory 63(6), 4037–4049 (2017)
Kasiviswanathan, S.P., Lee, H.K., Nissim, K., Raskhodnikova, S., Smith, A.: What can we learn privately? SIAM J. Comput. 40(3), 793–826 (2011)
Kingma, D.P., Ba, J.: Adam: a method for stochastic optimization. In: 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, 7–9 May 2015, Conference Track Proceedings. ICLR (2015)
Matsuoka, K.: Noise injection into inputs in back-propagation learning. IEEE Trans. Syst. Man Cybernet. 22(3), 436–440 (1992)
Mironov, I.: Rényi differential privacy. In: Proceedings of Computer Security Foundations Symposium (CSF). IEEE Computer Society (2017)
MLPerf Website: MLPerf - Fair and useful benchmarks for measuring training and inference performance of ML hardware, software, and services (2018). https://mlperf.org/
Nasr, M., Shokri, R., Houmansadr, A.: Comprehensive privacy analysis of deep learning: stand-alone and federated learning under passive and active white-box inference attacks (2018)
Nasr, M., Shokri, R., Houmansadr, A.: Machine learning with membership privacy using adversarial regularization. In: Proceedings of Conference on Computer and Communications Security (CCS). ACM Press (2018)
Papernot, N., Song, S., Mironov, I., Raghunathan, A., Talwar, K., Erlingsson, Ú.: Scalable private learning with PATE (2018)
Parkhi, O.M., Vedaldi, A., Zisserman, A.: Deep face recognition. In: British Machine Vision Conference. BMVA Press (2015)
Rahman, M.A., Rahman, T., Laganière, R., Mohammed, N.: Membership inference attack against differentially private deep learning model. Trans. Data Priv. 11, 61–79 (2018)
Sankararaman, S., Obozinski, G., Jordan, M.I., Halperin, E.: Genomic privacy and limits of individual detection in a pool. Nat. Genet. 41(9), 965–967 (2009)
Shokri, R., Shmatikov, V.: Privacy-preserving deep learning. In: Proceedings of Conference on Computer and Communications Security (CCS). ACM Press (2015)
Shokri, R., Stronati, M., Song, C., Shmatikov, V.: Membership inference attacks against ML models. In: Proceedings of Symposium on Security and Privacy (S&P). IEEE Computer Society (2017)
Song, S., Chaudhuri, K., Sarwate, A.D.: Stochastic gradient descent with differentially private updates. In: Proceedings of Conference on Signal and Information Processing. IEEE Computer Society (2013)
Wang, T., Blocki, J., Li, N., Jha, S.: Locally differentially private protocols for frequency estimation. In: Proceedings of USENIX Security Symposium. USENIX Association (2017)
Warner, S.L.: Randomized response: a survey technique for eliminating evasive answer bias. J. Am. Stat. Assoc. 60(309), 63–69 (1965)
Wirth, R., Hipp, J.: CRISP-DM: towards a standard process model for data mining. In: Proceedings of Conference on Practical Applications of Knowledge Discovery and Data Mining. Practical Application Company (2000)
Yeom, S., Fredrikson, M., Jha, S.: The unintended consequences of overfitting: training data inference attacks (2017)
Yeom, S., Giacomelli, I., Fredrikson, M., Jha, S.: Privacy risk in machine learning: analyzing the connection to overfitting (2018)
Acknowledgements
This work has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No. 825333 (MOSAICROWN).
Appendix
Neural network models and the composed \(\epsilon \) for LDP are provided in Table 1. We state hyperparameters, the composed \(\epsilon \) for CDP, and training accuracies in Table 2. The Texas Hospital Stays and Purchases Shopping Carts datasets provided by Shokri et al. are unbalanced in the number of records per class, as shown in Figs. 6 and 7.
Copyright information
© 2021 IFIP International Federation for Information Processing
Cite this paper
Bernau, D., Robl, J., Grassal, P.W., Schneider, S., Kerschbaum, F. (2021). Comparing Local and Central Differential Privacy Using Membership Inference Attacks. In: Barker, K., Ghazinour, K. (eds) Data and Applications Security and Privacy XXXV. DBSec 2021. Lecture Notes in Computer Science(), vol 12840. Springer, Cham. https://doi.org/10.1007/978-3-030-81242-3_2
DOI: https://doi.org/10.1007/978-3-030-81242-3_2
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-81241-6
Online ISBN: 978-3-030-81242-3