Towards Effective Device-Aware Federated Learning

  • Conference paper
AI*IA 2019 – Advances in Artificial Intelligence (AI*IA 2019)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 11946)

Abstract

With the wealth of information produced by social networks, smartphones, and medical or financial applications, concerns have been raised about the sensitivity of such data in terms of users’ personal privacy and data security. To address these issues, Federated Learning (FL) has recently been proposed as a means to leave data and computational resources distributed over a large number of nodes (clients), while a central coordinating server aggregates only locally computed updates without ever seeing the original data. In this work, we extend the FL framework by pushing forward the state of the art in the field along several dimensions: (i) unlike the original FedAvg approach, which relies solely on a single criterion (i.e., local dataset size), a suite of domain- and client-specific criteria forms the basis for computing each local client’s contribution; (ii) the multi-criteria contribution of each device is computed in a prioritized fashion by leveraging a priority-aware aggregation operator used in the field of information retrieval; and (iii) a mechanism is proposed for online adjustment of the aggregation operator parameters via a local search strategy with backtracking. Extensive experiments on a publicly available dataset indicate the merits of the proposed approach compared to the standard FedAvg baseline.
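To make the contrast in the abstract concrete, the following Python sketch shows standard FedAvg's size-weighted averaging next to a prioritized multi-criteria score in the spirit of the operator of da Costa Pereira et al. [6]. This is an illustrative stand-in, not the authors' implementation; the function names and the normalization are our own assumptions.

```python
import numpy as np

def fedavg_aggregate(updates, weights):
    # Weighted average of client model updates (flattened parameter vectors).
    # In standard FedAvg, weights are the clients' local dataset sizes.
    return np.average(np.stack(updates), axis=0, weights=np.asarray(weights, dtype=float))

def prioritized_score(scores):
    # Prioritized aggregation sketch: `scores` lists criterion satisfactions
    # in decreasing priority, each in [0, 1]. The importance of a criterion
    # is gated by how well the higher-priority criteria are satisfied.
    importance, num, den = 1.0, 0.0, 0.0
    for s in scores:
        num += importance * s
        den += importance
        importance *= s  # lower-priority criteria count only if this one is satisfied
    return num / den
```

In a device-aware variant, each client's aggregation weight in `fedavg_aggregate` would be replaced by a `prioritized_score` over its domain- and client-specific criteria rather than its dataset size alone.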


Notes

  1. It should be noted that the proposed adjustment algorithm may involve some communication and computational overhead due to the need to evaluate each of the candidate global models on local test data. We have not included this overhead in the count of rounds, since in the FL literature a round of communication is defined as the entire process of model exchange between clients and server plus local model training [11]. Alternatively, we could define these extra rounds as testing rounds, which incur the same communication cost as a round of communication but require significantly less computational power. In the worst case, we would need m! testing rounds for each round of communication, where m is the number of criteria.
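As a back-of-the-envelope check of the worst case stated in the footnote, the number of extra testing rounds grows with the factorial of the number of criteria (function name is ours, for illustration):

```python
from math import factorial

def worst_case_testing_rounds(num_criteria, communication_rounds):
    # One testing round per permutation of the criteria priorities,
    # for every round of communication (worst case).
    return factorial(num_criteria) * communication_rounds
```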

  2. We chose these values because they represent reasonable accuracy levels; higher values were not reached within the 1,000 allowed rounds of communication.

  3. The total number of participating devices in the federation is 371; thus 20%, as an example, indicates the round of communication required for \(0.2\times 371\approx 75\) devices to reach the desired target accuracy.

  4. We recall that a preference relation \(\succ \) is transitive. Hence Ds \(\succ \) Md \(\succ \) Ld implies Ds \(\succ \) Ld.

References

  1. Bagdasaryan, E., Veit, A., Hua, Y., Estrin, D., Shmatikov, V.: How to backdoor federated learning. arXiv preprint arXiv:1807.00459 (2018)

  2. Bonawitz, K., et al.: Towards federated learning at scale: system design. CoRR abs/1902.01046 (2019). http://arxiv.org/abs/1902.01046

  3. Caldas, S., et al.: LEAF: a benchmark for federated settings. arXiv preprint arXiv:1812.01097 (2018)

  4. Choquet, G.: Theory of capacities. Annales de l’Institut Fourier 5, 131–295 (1954). https://doi.org/10.5802/aif.53

  5. Cohen, G., Afshar, S., Tapson, J., van Schaik, A.: EMNIST: extending MNIST to handwritten letters. In: 2017 International Joint Conference on Neural Networks (IJCNN), pp. 2921–2926. IEEE (2017)

  6. da Costa Pereira, C., Dragoni, M., Pasi, G.: Multidimensional relevance: prioritized aggregation in a personalized information retrieval setting. Inf. Process. Manag. 48(2), 340–357 (2012). https://doi.org/10.1016/j.ipm.2011.07.001

  7. Grabisch, M.: The application of fuzzy integrals in multicriteria decision making. Eur. J. Oper. Res. 89(3), 445–456 (1996). https://doi.org/10.1016/0377-2217(95)00176-X. http://www.sciencedirect.com/science/article/pii/037722179500176X

  8. Grabisch, M., Roubens, M.: Application of the Choquet integral in multicriteria decision making. In: Fuzzy Measures and Integrals, pp. 348–374 (2000)

  9. Konecný, J., McMahan, B., Ramage, D.: Federated optimization: distributed optimization beyond the datacenter. CoRR abs/1511.03575 (2015). http://arxiv.org/abs/1511.03575

  10. Konecný, J., McMahan, H.B., Ramage, D., Richtárik, P.: Federated optimization: distributed machine learning for on-device intelligence. CoRR abs/1610.02527 (2016). http://arxiv.org/abs/1610.02527

  11. Konecný, J., McMahan, H.B., Yu, F.X., Richtárik, P., Suresh, A.T., Bacon, D.: Federated learning: strategies for improving communication efficiency. CoRR abs/1610.05492 (2016). http://arxiv.org/abs/1610.05492

  12. Lecun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. Proc. IEEE 86(11), 2278–2324 (1998). https://doi.org/10.1109/5.726791

  13. Marrara, S., Pasi, G., Viviani, M.: Aggregation operators in information retrieval. Fuzzy Sets Syst. 324, 3–19 (2017). https://doi.org/10.1016/j.fss.2016.12.018

  14. McMahan, B., Moore, E., Ramage, D., Hampson, S., Arcas, B.A.: Communication-efficient learning of deep networks from decentralized data. In: Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, AISTATS 2017, Fort Lauderdale, FL, USA, 20–22 April 2017, pp. 1273–1282 (2017). http://proceedings.mlr.press/v54/mcmahan17a.html

  15. Miller, K.W., Voas, J.M., Hurlburt, G.F.: BYOD: security and privacy considerations. IT Prof. 14(5), 53–55 (2012). https://doi.org/10.1109/MITP.2012.93

  16. Sahu, A.K., Li, T., Sanjabi, M., Zaheer, M., Talwalkar, A., Smith, V.: On the convergence of federated optimization in heterogeneous networks. arXiv preprint arXiv:1812.06127 (2018)

  17. Yager, R.R.: On ordered weighted averaging aggregation operators in multicriteria decision making. IEEE Trans. Syst. Man Cybern. 18(1), 183–190 (1988). https://doi.org/10.1109/21.87068

  18. Yager, R.R.: Quantifier guided aggregation using OWA operators. Int. J. Intell. Syst. 11(1), 49–73 (1996)

  19. Zhao, Y., Li, M., Lai, L., Suda, N., Civin, D., Chandra, V.: Federated learning with non-IID data. CoRR abs/1806.00582 (2018). http://arxiv.org/abs/1806.00582

Acknowledgements

The authors wish to thank Angelo Schiavone for fruitful discussions and for helping with the implementation of the framework.

Author information

Corresponding author

Correspondence to Antonio Ferrara.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

Anelli, V.W., Deldjoo, Y., Di Noia, T., Ferrara, A. (2019). Towards Effective Device-Aware Federated Learning. In: Alviano, M., Greco, G., Scarcello, F. (eds) AI*IA 2019 – Advances in Artificial Intelligence. AI*IA 2019. Lecture Notes in Computer Science, vol 11946. Springer, Cham. https://doi.org/10.1007/978-3-030-35166-3_34

  • DOI: https://doi.org/10.1007/978-3-030-35166-3_34

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-35165-6

  • Online ISBN: 978-3-030-35166-3

  • eBook Packages: Computer Science, Computer Science (R0)
