Comparing Local and Central Differential Privacy Using Membership Inference Attacks

  • Conference paper

Data and Applications Security and Privacy XXXV (DBSec 2021)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 12840)

Abstract

Attacks that aim to identify the training data of neural networks represent a severe threat to the privacy of individuals in the training dataset. A possible protection is to anonymize either the training data or the training function with differential privacy. However, data scientists can choose between local and central differential privacy and need to select meaningful privacy parameters \(\epsilon \). Furthermore, comparing local and central differential privacy on the basis of their privacy parameters can lead data scientists to incorrect conclusions, since the parameters reflect different types of mechanisms.

Instead, we empirically compare the relative privacy-accuracy trade-off of one central and two local differential privacy mechanisms under a white-box membership inference attack. While membership inference only reflects a lower bound on inference risk and differential privacy formulates an upper bound, our experiments with several datasets show that the privacy-accuracy trade-off is similar for both types of mechanisms despite the large difference in their upper bounds. This suggests that the upper bound is far from the practical susceptibility to membership inference. Thus, small \(\epsilon \) in central differential privacy and large \(\epsilon \) in local differential privacy result in similar membership inference risks, and local differential privacy can be a meaningful alternative to central differential privacy for differentially private deep learning, despite its comparatively higher privacy parameters.
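
To make the attack setting concrete: the experiments use the white-box membership inference attack of Nasr et al. [31]. As a simpler, purely illustrative baseline, the sketch below implements a black-box loss-threshold attack in the spirit of Yeom et al. [43]; the loss distributions and threshold here are synthetic assumptions, not the paper's setup.

```python
import numpy as np

# Hypothetical loss values: members of the training set tend to incur
# lower loss than held-out records because the model memorizes them.
rng = np.random.default_rng(0)
member_losses = rng.exponential(scale=0.2, size=1000)     # in training set
nonmember_losses = rng.exponential(scale=0.6, size=1000)  # held out

losses = np.concatenate([member_losses, nonmember_losses])
is_member = np.concatenate([np.ones(1000, bool), np.zeros(1000, bool)])

# Yeom et al.-style rule: guess "member" whenever a record's loss is
# below the target model's average training loss.
threshold = member_losses.mean()
guesses = losses < threshold

# Precision and recall of the membership guesses
# (cf. Davis and Goadrich [7]).
tp = np.sum(guesses & is_member)
precision = tp / guesses.sum()
recall = tp / is_member.sum()
print(f"precision={precision:.2f}, recall={recall:.2f}")
```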

J. Robl and P. W. Grassal contributed equally to this research.

The work of P. W. Grassal and S. Schneider was done during an internship at SAP.

Notes

  1. We used TensorFlow Privacy: https://github.com/tensorflow/privacy; a minimal usage sketch follows these notes.

  2. We provide this dataset along with all evaluation code on GitHub: https://github.com/SAP-samples/security-research-membership-inference-and-differential-privacy.
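
As a hedged illustration of footnote 1, the sketch below shows how differentially private SGD training [1] is typically configured with TensorFlow Privacy, following the library's public classification tutorial. Import paths vary across library versions, and all hyperparameter values here are illustrative assumptions, not the paper's configuration (the paper's settings are in Table 2 of the appendix).

```python
import tensorflow as tf
from tensorflow_privacy.privacy.optimizers.dp_optimizer_keras import (
    DPKerasSGDOptimizer,
)

# A simple classifier; the paper's architectures are listed in Table 1.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10),
])

# DP-SGD: clip each per-example gradient to norm C, then add Gaussian
# noise with standard deviation z * C. The noise multiplier z is the
# knob varied in the paper (z = 0.5 to z = 16).
optimizer = DPKerasSGDOptimizer(
    l2_norm_clip=1.0,      # clipping norm C (illustrative value)
    noise_multiplier=1.0,  # z (illustrative value)
    num_microbatches=32,   # must evenly divide the batch size
    learning_rate=0.1,
)

# DP-SGD needs per-example losses, hence reduction=NONE.
loss = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True, reduction=tf.keras.losses.Reduction.NONE,
)
model.compile(optimizer=optimizer, loss=loss, metrics=["accuracy"])
# model.fit(x_train, y_train, batch_size=32, epochs=10)
```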

References

  1. Abadi, M., et al.: Deep learning with differential privacy. In: Proceedings of Conference on Computer and Communications Security (CCS). ACM Press (2016)

  2. Abowd, J.M., Schmutte, I.M.: An economic analysis of privacy protection and statistical accuracy as social choices. Am. Econ. Rev. 109(1), 171–202 (2019)

  3. Backes, M., Berrang, P., Humbert, M., Manoharan, P.: Membership privacy in microRNA-based studies. In: Proceedings of Conference on Computer and Communications Security (CCS). ACM Press (2016)

  4. Bassily, R., Smith, A., Thakurta, A.: Private empirical risk minimization: efficient algorithms and tight error bounds. In: Proceedings of Symposium on Foundations of Computer Science (FOCS). IEEE Computer Society (2014)

  5. BBC News: Google DeepMind NHS app test broke UK privacy law (2017). https://www.bbc.com/news/technology-40483202

  6. Carlini, N., Liu, C., Kos, J., Erlingsson, Ú., Song, D.: The secret sharer: measuring unintended neural network memorization and extracting secrets (2018)

  7. Davis, J., Goadrich, M.: The relationship between precision-recall and ROC curves. In: Proceedings of Conference on Machine Learning (ICML). Omnipress (2006)

  8. Dwork, C.: Differential privacy. In: Bugliesi, M., Preneel, B., Sassone, V., Wegener, I. (eds.) ICALP 2006. LNCS, vol. 4052, pp. 1–12. Springer, Heidelberg (2006). https://doi.org/10.1007/11787006_1

  9. Dwork, C., Kenthapadi, K., McSherry, F., Mironov, I., Naor, M.: Our data, ourselves: privacy via distributed noise generation. In: Vaudenay, S. (ed.) EUROCRYPT 2006. LNCS, vol. 4004, pp. 486–503. Springer, Heidelberg (2006). https://doi.org/10.1007/11761679_29

  10. Dwork, C., Roth, A.: The algorithmic foundations of differential privacy. Found. Trends Theoret. Comput. Sci. 9(3–4), 211–407 (2014)

  11. Erlingsson, Ú., Feldman, V., Mironov, I., Raghunathan, A., Talwar, K., Thakurta, A.: Amplification by shuffling: from local to central differential privacy via anonymity. In: Proceedings of Symposium on Discrete Algorithms (SODA) (2019)

  12. Erlingsson, Ú., Pihur, V., Korolova, A.: RAPPOR: randomized aggregatable privacy-preserving ordinal response. In: Proceedings of Conference on Computer and Communications Security (CCS). ACM Press (2014)

  13. Everingham, M., Van Gool, L., Williams, C.K.I., Winn, J., Zisserman, A.: The PASCAL visual object classes (VOC) challenge. Int. J. Comput. Vis. 88(2), 303–338 (2010)

  14. Fan, L.: Image pixelization with differential privacy. In: Kerschbaum, F., Paraboschi, S. (eds.) DBSec 2018. LNCS, vol. 10980, pp. 148–162. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-95729-6_10

  15. Fredrikson, M., Jha, S., Ristenpart, T.: Model inversion attacks that exploit confidence information and basic countermeasures. In: Proceedings of Conference on Computer and Communications Security (CCS). ACM Press (2015)

  16. Fredrikson, M., Lantz, E., Jha, S., Lin, S., Page, D., Ristenpart, T.: Privacy in pharmacogenetics: an end-to-end case study of personalized warfarin dosing. In: Proceedings of USENIX Security Symposium. USENIX Association (2014)

  17. Goodfellow, I., Bengio, Y., Courville, A.: Deep Learning. MIT Press (2016). http://www.deeplearningbook.org

  18. Grandvalet, Y., Canu, S.: Comments on "noise injection into inputs in back propagation learning". IEEE Trans. Syst. Man Cybernet. 25(4), 678–681 (1995)

  19. Hay, M., Machanavajjhala, A., Miklau, G., Chen, Y., Zhang, D.: Principled evaluation of differentially private algorithms using DPBench. In: Proceedings of Conference on Management of Data (SIGMOD). ACM Press (2016)

  20. Hayes, J., Melis, L., Danezis, G., De Cristofaro, E.: LOGAN: membership inference attacks against generative models. Proc. Priv. Enhanc. Technol. (PoPETs) 2019(1), 133–152 (2019)

  21. Hill, K.: How Target figured out a teen girl was pregnant before her father did (2012). https://www.forbes.com/sites/kashmirhill/2012/02/16/how-target-figured-out-a-teen-girl-was-pregnant-before-her-father-did/

  22. Huang, G.B., Ramesh, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: a database for studying face recognition in unconstrained environments. Technical report, University of Massachusetts (2007)

  23. Iyengar, R., Near, J.P., Song, D., Thakkar, O.D., Thakurta, A., Wang, L.: Towards practical differentially private convex optimization. In: Proceedings of Symposium on Security and Privacy (S&P). IEEE Computer Society (2019)

  24. Jayaraman, B., Evans, D.: Evaluating differentially private machine learning in practice. In: Proceedings of USENIX Security Symposium. USENIX Association (2019)

  25. Kairouz, P., Oh, S., Viswanath, P.: The composition theorem for differential privacy. IEEE Trans. Inf. Theory 63(6), 4037–4049 (2017)

  26. Kasiviswanathan, S.P., Lee, H.K., Nissim, K., Raskhodnikova, S., Smith, A.: What can we learn privately? SIAM J. Comput. 40(3), 793–826 (2011)

  27. Kingma, D.P., Ba, J.: Adam: a method for stochastic optimization. In: Proceedings of International Conference on Learning Representations (ICLR), San Diego (2015)

  28. Matsuoka, K.: Noise injection into inputs in back-propagation learning. IEEE Trans. Syst. Man Cybernet. 22(3), 436–440 (1992)

  29. Mironov, I.: Rényi differential privacy. In: Proceedings of Computer Security Foundations Symposium (CSF). IEEE Computer Society (2017)

  30. MLPerf: Fair and useful benchmarks for measuring training and inference performance of ML hardware, software, and services (2018). https://mlperf.org/

  31. Nasr, M., Shokri, R., Houmansadr, A.: Comprehensive privacy analysis of deep learning: stand-alone and federated learning under passive and active white-box inference attacks (2018)

  32. Nasr, M., Shokri, R., Houmansadr, A.: Machine learning with membership privacy using adversarial regularization. In: Proceedings of Conference on Computer and Communications Security (CCS). ACM Press (2018)

  33. Papernot, N., Song, S., Mironov, I., Raghunathan, A., Talwar, K., Erlingsson, Ú.: Scalable private learning with PATE (2018)

  34. Parkhi, O.M., Vedaldi, A., Zisserman, A.: Deep face recognition. In: British Machine Vision Conference (BMVC). BMVA Press (2015)

  35. Rahman, M.A., Rahman, T., Laganière, R., Mohammed, N.: Membership inference attack against differentially private deep learning model. Trans. Data Priv. 11, 61–79 (2018)

  36. Sankararaman, S., Obozinski, G., Jordan, M.I., Halperin, E.: Genomic privacy and limits of individual detection in a pool. Nat. Genet. 41, 965–967 (2009)

  37. Shokri, R., Shmatikov, V.: Privacy-preserving deep learning. In: Proceedings of Conference on Computer and Communications Security (CCS). ACM Press (2015)

  38. Shokri, R., Stronati, M., Song, C., Shmatikov, V.: Membership inference attacks against machine learning models. In: Proceedings of Symposium on Security and Privacy (S&P). IEEE Computer Society (2017)

  39. Song, S., Chaudhuri, K., Sarwate, A.D.: Stochastic gradient descent with differentially private updates. In: Proceedings of Conference on Signal and Information Processing. IEEE Computer Society (2013)

  40. Wang, T., Blocki, J., Li, N., Jha, S.: Locally differentially private protocols for frequency estimation. In: Proceedings of USENIX Security Symposium. USENIX Association (2017)

  41. Warner, S.L.: Randomized response: a survey technique for eliminating evasive answer bias. J. Am. Stat. Assoc. 60(309), 63–69 (1965)

  42. Wirth, R., Hipp, J.: CRISP-DM: towards a standard process model for data mining. In: Proceedings of Conference on Practical Applications of Knowledge Discovery and Data Mining. Practical Application Company (2000)

  43. Yeom, S., Fredrikson, M., Jha, S.: The unintended consequences of overfitting: training data inference attacks (2017)

  44. Yeom, S., Giacomelli, I., Fredrikson, M., Jha, S.: Privacy risk in machine learning: analyzing the connection to overfitting (2018)

Acknowledgements

This work has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No. 825333 (MOSAICROWN).

Author information

Correspondence to Daniel Bernau.

Appendix

Neural network models and composed \(\epsilon \) for LDP are provided in Table 1. We state hyperparameters, composed \(\epsilon \) for CDP, and training accuracies in Table 2; a short sketch of how LDP budgets compose follows the figure captions below. Texas Hospital Stays and Purchases Shopping Carts, provided by Shokri et al. [38], are unbalanced in terms of records per class, as shown in Figs. 6 and 7.

Table 1. Overview of datasets considered in evaluation.
Table 2. Target model training accuracy (from original to smallest \(\epsilon \)), CDP \(\epsilon \) values (from \(z = 0.5\) to \(z = 16\)), and hyperparameters
Fig. 6. Quantity of records per label for Purchases Shopping Carts

Fig. 7. Quantity of records per label for Texas Hospital Stays
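
For intuition on why the composed LDP budgets in Table 1 are so much larger than the CDP budgets in Table 2, consider the classic randomized response mechanism of Warner [41] together with naive sequential composition. The sketch below is a generic illustration under these assumptions; the paper's actual LDP mechanisms differ in detail.

```python
import math
import random

def randomized_response(bit: bool, epsilon: float) -> bool:
    """Warner's randomized response: report the true bit with
    probability p = e^eps / (1 + e^eps). This satisfies eps-LDP,
    since the likelihood ratio p / (1 - p) equals e^eps."""
    p = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return bit if random.random() < p else not bit

def composed_epsilon(eps_per_attribute: float, num_attributes: int) -> float:
    """Naive sequential composition: perturbing each of d attributes
    independently with eps-LDP yields (d * eps)-LDP for the record."""
    return eps_per_attribute * num_attributes

# Perturbing 100 binary attributes at eps = 0.5 each already composes
# to a record-level budget of 50, far above typical CDP budgets.
print(composed_epsilon(0.5, 100))  # -> 50.0
```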

Copyright information

© 2021 IFIP International Federation for Information Processing

Cite this paper

Bernau, D., Robl, J., Grassal, P.W., Schneider, S., Kerschbaum, F. (2021). Comparing Local and Central Differential Privacy Using Membership Inference Attacks. In: Barker, K., Ghazinour, K. (eds) Data and Applications Security and Privacy XXXV. DBSec 2021. Lecture Notes in Computer Science, vol. 12840. Springer, Cham. https://doi.org/10.1007/978-3-030-81242-3_2

  • DOI: https://doi.org/10.1007/978-3-030-81242-3_2

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-81241-6

  • Online ISBN: 978-3-030-81242-3