
Survey and Planning of High-Payload Human-Robot Collaboration: Multi-modal Communication Based on Sensor Fusion

  • Conference paper
  • First Online:
Advanced Manufacturing and Automation IX (IWAMA 2019)

Part of the book series: Lecture Notes in Electrical Engineering ((LNEE,volume 634))


Abstract

Human-Robot Collaboration (HRC) has gained increasing attention with the widespread commissioning and use of collaborative robots. However, recent studies show that fenceless collaborative robots are not as harmless as they appear. In addition, collaborative robots usually have a very limited payload (up to 12 kg), which is insufficient for most industrial applications. To use high-payload industrial robots in HRC, today's safety systems offer only one option: limiting the speed of robot motion execution and adding redundant systems for force supervision. Reducing execution speed lowers efficiency, which in turn limits the wider adoption of automation. To overcome this limitation, we propose in this paper a novel fusion of different safety-related sensors, combined in a way that ensures safety while the human operator can focus on task execution and communicate with the system in a natural way. Different (multi-modal) communication channels are explored and demonstration scenarios are presented.
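The core idea of the abstract, fusing several safety-related sensors and deriving a conservative speed limit instead of a blanket slowdown, can be illustrated with a minimal sketch. This is not the authors' implementation; the sensor set, distance thresholds, and speed values below are hypothetical, chosen only to show the general shape of speed-and-separation monitoring with a most-conservative fusion rule.

```python
# Illustrative sketch only: fuse human-robot distance estimates from several
# safety sensors and scale the permitted robot speed accordingly.
# All thresholds and speeds are hypothetical example values.

def fuse_min_distance(readings):
    """Return the most conservative (smallest) valid distance in metres.

    `readings` holds one estimate per sensor (e.g. camera, laser scanner,
    tactile skin); None marks a sensor with no detection or no valid data.
    """
    valid = [d for d in readings if d is not None and d >= 0.0]
    if not valid:
        return 0.0  # no trustworthy reading: assume the worst case
    return min(valid)

def allowed_speed(distance_m, stop_dist=0.5, full_speed_dist=2.0, v_max=1.5):
    """Scale the permitted speed (m/s) linearly between a protective-stop
    distance and the distance at which full speed is allowed."""
    if distance_m <= stop_dist:
        return 0.0          # human too close: protective stop
    if distance_m >= full_speed_dist:
        return v_max        # human far enough away: full speed
    frac = (distance_m - stop_dist) / (full_speed_dist - stop_dist)
    return v_max * frac

# Example: camera estimates 1.2 m, laser scanner 1.4 m, tactile skin reports
# no contact (None) -> fused distance 1.2 m, reduced but nonzero speed.
d = fuse_min_distance([1.2, 1.4, None])
v = allowed_speed(d)
```

The point of the fusion step is that the robot only slows down as much as the closest detected human actually requires, rather than running at a fixed reduced speed at all times.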



Acknowledgements

The work reported in this paper was supported by the centre for research based innovation SFI Manufacturing in Norway, and is partially funded by the Research Council of Norway under contract number 237900.

Author information


Correspondence to Gabor Sziebig.


Copyright information

© 2020 Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Sziebig, G. (2020). Survey and Planning of High-Payload Human-Robot Collaboration: Multi-modal Communication Based on Sensor Fusion. In: Wang, Y., Martinsen, K., Yu, T., Wang, K. (eds) Advanced Manufacturing and Automation IX. IWAMA 2019. Lecture Notes in Electrical Engineering, vol 634. Springer, Singapore. https://doi.org/10.1007/978-981-15-2341-0_68
