Published by Oldenbourg Wissenschaftsverlag, February 17, 2023

Automated end-of-line quality assurance with visual inspection and convolutional neural networks

Hangbeom Kim, Andreas Frommknecht, Bernd Bieberstein, Janek Stahl and Marco F. Huber
From the journal tm - Technisches Messen

Abstract

End-of-line (EOL) quality assurance of finished components has so far required additional manual inspections, burdening manufacturers with high labor costs. To automate the EOL process, this paper introduces a fully AI-based quality classification system. A robot automatically places the components under the optical inspection system. A convolutional neural network (CNN) is used for the quality classification of the recorded images. After quality control, the component is automatically sorted into different bins depending on the inspection result. The trained CNN models achieve up to 98.7% accuracy on the test data. The classification performance of the CNN is compared with that of a rule-based approach. Additionally, the trained classification model is interpreted by an explainable AI method to make its decisions comprehensible to humans and to build trust in its predictions. This work originated from an actual industrial use case at Witzenmann GmbH, together with whom a demonstrator was realized.


Corresponding author: Hangbeom Kim, Department Machine Vision and Signal Processing, Fraunhofer-Institut für Produktionstechnik und Automatisierung (IPA), Stuttgart, Germany, E-mail:

Funding source: Ministerium für Wirtschaft, Arbeit und Tourismus Baden-Württemberg

Award Identifier / Grant number: 036-140100

About the authors

Hangbeom Kim

Hangbeom Kim received his B.Sc. in Electrical Engineering from Kookmin University in 2016 and his M.Sc. in Information Technology from the University of Stuttgart in 2018. Since 2019, he has been working as a research associate in the Department Machine Vision and Signal Processing at the Fraunhofer Institute for Manufacturing Engineering and Automation IPA. His research focuses on computer vision and machine learning.

Andreas Frommknecht

Andreas Frommknecht received his diploma in mathematics from the University of Ulm in 2011 and his Ph.D. degree in mechanical engineering from the University of Stuttgart in 2021. Since 2011, he has been working as a research associate in the Department Machine Vision and Signal Processing at the Fraunhofer Institute for Manufacturing Engineering and Automation IPA. Since 2020, he has led the group Optical Measurement and Testing Systems in the same department. His research focuses on analytical and machine-learning-based visual data processing.

Bernd Bieberstein

Bernd Bieberstein studied mechanical engineering in Stuttgart from 1983 to 1990, graduating with a Dipl.-Ing. degree in 1990. After his studies, he joined the Fraunhofer Institute for Manufacturing Engineering and Automation IPA in Stuttgart in the Department Machine Vision and Signal Processing. He specializes in analytical image processing for serial production, including dimensional measurement as well as texture analysis.

Janek Stahl

Janek Stahl has been a research associate at the Fraunhofer Institute for Manufacturing Engineering and Automation IPA in Stuttgart, Germany, since 2015. After studying mechanical engineering at HTWG Konstanz and the University of Stuttgart, he joined the Machine Vision and Signal Processing department, where he works on machine learning and texture analysis in the field of 2D image processing for industrial applications.

Marco F. Huber

Marco Huber received his diploma, Ph.D., and habilitation degrees in computer science from the Karlsruhe Institute of Technology (KIT), Germany, in 2006, 2009, and 2015, respectively. From June 2009 to May 2011, he led the research group Variable Image Acquisition and Processing at Fraunhofer IOSB, Karlsruhe, Germany. Subsequently, he was a Senior Researcher with AGT International, Darmstadt, Germany, until March 2015. From April 2015 to September 2018, he was responsible for product development and data science services of the Katana division at USU Software AG, Karlsruhe, Germany. At the same time, he was an adjunct professor of computer science at KIT. Since October 2018, he has been a full professor at the University of Stuttgart. He is also director of the Department Cyber Cognitive Intelligence (CCI) and of the Department Machine Vision and Signal Processing at Fraunhofer IPA in Stuttgart, Germany. His research interests include machine learning, planning and decision making, image processing, and robotics.

Acknowledgment

The authors would like to thank Witzenmann GmbH for the great support and productive collaboration during the project.

  1. Author contributions: All the authors have accepted responsibility for the entire content of this submitted manuscript and approved submission.

  2. Research funding: The described work was funded by the Baden-Wuerttemberg Ministry for Economic Affairs, Labour and Tourism (Project KI-Fortschrittszentrum “Lernende Systeme und Kognitive Robotik”).

  3. Conflict of interest statement: The authors declare no conflicts of interest regarding this article.


Received: 2022-10-18
Accepted: 2023-01-26
Published Online: 2023-02-17
Published in Print: 2023-03-28

© 2023 Walter de Gruyter GmbH, Berlin/Boston
