
Data-Driven Automotive Development: Federated Reinforcement Learning for Calibration and Control

Conference paper
22. Internationales Stuttgarter Symposium

Abstract

The importance of data-driven methods in automotive development is continuously increasing. Reinforcement learning methods show great potential in this area, but the data they require from system interaction can be expensive to produce within the traditional development process. In the automotive industry, data collection is further constrained by privacy concerns regarding intellectual property and customer data. Suitable reinforcement learning approaches must overcome these challenges to learn effectively and efficiently. One possible solution is federated learning, which enables learning on distributed data through model aggregation. We therefore investigate the federated reinforcement learning methodology and propose a concept for a continuous automotive development process. The concept contributes separate training loops for development and for field operation. Furthermore, we present a customization and verification procedure within the aggregation step. The approach is demonstrated for electric motor current control.
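The model-aggregation step at the core of this concept can be sketched in a few lines of Python. The following is a minimal FedAvg-style illustration, not the paper's implementation: the function names, the plain NumPy weight dictionaries, and the verify callback (standing in for the verification procedure mentioned above) are assumptions made for this example.

import numpy as np

def federated_average(client_weights, client_sizes):
    """FedAvg-style aggregation: weighted average of client policy parameters.

    client_weights: list of dicts mapping parameter names to NumPy arrays,
                    one dict per client (e.g., per vehicle or test bench).
    client_sizes:   number of local training samples per client, used as
                    aggregation weights.
    """
    total = float(sum(client_sizes))
    return {
        name: sum((n / total) * w[name]
                  for w, n in zip(client_weights, client_sizes))
        for name in client_weights[0]
    }

def aggregate_with_verification(client_weights, client_sizes,
                                current_global, verify):
    """Aggregate, then accept the candidate global model only if it passes
    a verification check; otherwise keep the current global model. This is
    a hypothetical gate mirroring the verification procedure within the
    aggregation step described in the abstract."""
    candidate = federated_average(client_weights, client_sizes)
    return candidate if verify(candidate) else current_global

# Example: two clients with identical parameter shapes.
clients = [{"w": np.ones(3)}, {"w": 3 * np.ones(3)}]
new_global = aggregate_with_verification(
    clients, client_sizes=[100, 300],
    current_global={"w": np.zeros(3)},
    verify=lambda m: bool(np.all(np.isfinite(m["w"]))),
)
# new_global["w"] == [2.5, 2.5, 2.5], weighted toward the larger client.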



Author information

Corresponding author

Correspondence to Thomas Rudolf.


Copyright information

© 2022 Springer Fachmedien Wiesbaden GmbH, part of Springer Nature

About this paper


Cite this paper

Rudolf, T., Schürmann, T., Skull, M., Schwab, S., Hohmann, S. (2022). Data-Driven Automotive Development: Federated Reinforcement Learning for Calibration and Control. In: Bargende, M., Reuss, H.C., Wagner, A. (eds) 22. Internationales Stuttgarter Symposium. Proceedings. Springer Vieweg, Wiesbaden. https://doi.org/10.1007/978-3-658-37009-1_26
