
Additive Manufacturing

Volume 21, May 2018, Pages 517-528

Full Length Article
Application of supervised machine learning for defect detection during metallic powder bed fusion additive manufacturing using high resolution imaging.

https://doi.org/10.1016/j.addma.2018.04.005

Abstract

Process monitoring in additive manufacturing (AM) is a crucial component in the mission of broadening AM industrialization. However, conventional part evaluation and qualification techniques, such as computed tomography (CT), can only be utilized after the build is complete, and thus eliminate any potential to correct defects during the build process. In contrast to post-build CT, in situ defect detection based on in situ sensing, such as layerwise visual inspection, enables the potential for in-process re-melting and correction of detected defects and thus facilitates in-process part qualification. This paper describes the development and implementation of such an in situ defect detection strategy for powder bed fusion (PBF) AM using supervised machine learning.

During the build process, multiple images were collected at each build layer using a high resolution digital single-lens reflex (DSLR) camera. For each neighborhood in the resulting layerwise image stack, multi-dimensional visual features were extracted and evaluated using binary classification techniques, in this case a linear support vector machine (SVM). Through binary classification, neighborhoods were then categorized as either a flaw, i.e. an undesirable interruption in the typical structure of the material, or a nominal build condition. Ground truth labels, i.e. the true locations of flaws and nominal build areas, which are needed to train the binary classifiers, were obtained from post-build high-resolution 3D CT scan data. In CT scans, discontinuities, e.g. incomplete fusion, porosity, cracks, or inclusions, were identified using automated analysis tools or manual inspection. The xyz locations of the CT data were transferred into the layerwise image domain using an affine transformation, which was estimated using reference points embedded in the part. After the classifier had been properly trained, in situ defect detection accuracies greater than 80% were demonstrated during cross-validation experiments.

Introduction

Metallic additive manufacturing (AM) is a process in which near-net-shape parts are built in a layer-by-layer manner from powder alloys directly from digital files. Powder bed fusion AM (PBFAM) is a form of AM in which a laser selectively melts consecutive layers of metal powder placed on a build platform inside a build chamber [1]. After each melt cycle, a new layer of metal powder is spread across the build platform by a recoater blade, rake, or roller. PBFAM parts are made up of hundreds or thousands of layers (typically ∼20–60 μm layer thickness), depending on part dimensions and material, and build times range from hours to days [2]. However, discontinuities in PBFAM parts (e.g. incomplete fusion, porosity, cracks, or inclusions) may arise from contamination or from irregularities in powder recoating, laser–material interaction, or part solidification [3], and are thus a common concern, negatively affecting mechanical properties [[4], [5], [6]].

Detection and/or mitigation of such discontinuities may, however, be possible through monitoring of the AM build process. Indeed, research on process monitoring of AM systems has increased in the past decade, driven by AM’s appeal for manufacturing of complex high-value and low-volume production parts [7]. A common approach to process monitoring is measurement of melt pool characteristics. Melt-pool monitoring can provide insight into part quality and process phenomena; however, melt-pool monitoring systems require tight integration with high-speed scanners, and analysis can be complicated by challenges in calibrating the emissivity of the melt pool [7]. Alternatively, imaging of build layers before or after a laser exposure offers an inexpensive and system-independent approach for defect detection compared to melt-pool metrology and post-evaluation techniques [8]. Here, this strategy is termed layerwise imaging.

Layerwise process monitoring has been attempted using visible-light and infrared (IR) imaging. Kleszczynski et al. [9] presented a system-independent layerwise imaging setup, based on a 29-megapixel charge-coupled device (CCD) camera, that captured process irregularities on the in situ surfaces of PBFAM parts. Jacobsmühlen et al. [10] expanded on Kleszczynski et al. by establishing a connection between the detection of these surface irregularities and mechanical part performance. Mireles et al. [11] presented a low-resolution IR layerwise imaging system on a PBFAM process to observe seeded void discontinuities in parts. Geometries of porous discontinuities were measured with contour tracing in the IR layerwise and CT images; a formal comparison revealed a substantial difference in measured geometry between the two domains. Schwerdtfeger et al. [12] demonstrated a correspondence between low-resolution IR layerwise imaging and metallographic imaging of electron-beam-based PBFAM parts.

Despite the promise of layerwise monitoring, post-process inspection is still the de facto method for defect detection in AM parts. Spierings et al. [13] compared and contrasted techniques for post-process porosity analysis of PBF parts, including Archimedes methods, metallographic imaging, and CT scanning. They note that successful void detection in CT images, relative to the Archimedes method, is subject to the selected size threshold for detection of voids, i.e. a larger threshold will prevent detection of smaller voids. In [14], Wits et al. demonstrated that inspection of AM parts using Archimedes, microscopic, and CT methods predicts similar porosities; however, CT scanning further enables part porosity to be quantified.

Two strategies can be used to improve conclusions drawn from layerwise data: fusion of multiple layerwise image sets, or fusion of layerwise images with a separate data stream. Information fusion between layerwise images from homogeneous sensors (similar sensors operating under different conditions, forming a complementary sensor configuration) provides higher confidence in the interpretation of an observation. Fusing CT data and layerwise images forms a cooperative sensor configuration built from heterogeneous (dissimilar) sensors, thereby deriving information that cannot be observed by either sensor individually [15]. Weckenmann et al. [15] illustrated that multisensor data fusion, as compared to single-sensor applications, reduces uncertainty in dimensional metrology and provides a more complete description. Aminzadeh and Kurfess [16] proposed detecting defects in PBF parts using online visual inspection sensors, in varying sensor configurations, paired with classifiers such as neural networks or support vector machines (SVMs), but their proposed strategy was not demonstrated.

Supervised machine learning is generally executed in two steps. First, the system is trained, implying that the parameters of the underlying classification scheme are estimated using a training data set with known labels, i.e. the ground truth. For SVM classification, training entails the construction of a decision boundary that best separates the given training data points based on ground truth labels [17]. Second, the performance of the trained classification scheme and associated decision boundary is tested by generating predicted labels for a previously unseen data set, i.e. the test data set. A formal comparison between predicted labels and ground truth labels of the test data set then reveals the out-of-sample classification performance, which typically includes metrics such as false positive rates and false negative rates. This procedure is commonly referred to as cross-validation.
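The two-step procedure above can be sketched in code. The following is a minimal illustration, not the authors' implementation: the data are synthetic placeholders (in the paper, each row would be the visual-feature vector of one image neighborhood, with a flaw/nominal label taken from CT ground truth), and the trainer is a generic soft-margin linear SVM fit by sub-gradient descent, evaluated with 5-fold cross-validation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: each row plays the role of one image
# neighborhood's feature vector; labels in {-1, +1} stand for
# nominal vs. flaw (in the paper these come from CT ground truth).
n, d = 400, 20
X = rng.normal(size=(n, d))
y = np.where(X[:, 0] + 0.5 * X[:, 1] > 0, 1.0, -1.0)

def train_linear_svm(X, y, C=1.0, epochs=500, lr=0.05):
    """Sub-gradient descent on the soft-margin hinge-loss objective
    0.5*||w||^2 + C * mean(max(0, 1 - y*(X@w + b)))."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        active = y * (X @ w + b) < 1              # margin violators
        w -= lr * (w - C * (y[active] @ X[active]) / len(y))
        b -= lr * (-C * y[active].sum() / len(y))
    return w, b

def cross_validate(X, y, k=5):
    """Step 1 and step 2 per fold: train the decision boundary on known
    labels, then predict labels for the held-out (unseen) fold."""
    folds = np.array_split(np.arange(len(y)), k)
    accs = []
    for i, test in enumerate(folds):
        train = np.concatenate([f for j, f in enumerate(folds) if j != i])
        w, b = train_linear_svm(X[train], y[train])
        accs.append((np.sign(X[test] @ w + b) == y[test]).mean())
    return np.array(accs)

accs = cross_validate(X, y)
print("fold accuracies:", np.round(accs, 3))
print("mean out-of-sample accuracy:", round(accs.mean(), 3))
```

The per-fold accuracies correspond to the out-of-sample performance described above; in practice one would also report false positive and false negative rates rather than accuracy alone.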

In this work, a methodology to train an SVM classifier to detect discontinuities from in situ sensor data using labeled ground truth data extracted from post-build CT scans is developed and demonstrated. The overall strategy is illustrated in Fig. 1.

Each block in the diagram is detailed in Section 2. Algorithms are developed for automated acquisition of ground truth labels from CT scan data (discontinuity vs. nominal build condition), and for transfer of these labeled data from the CT scan domain into an in situ sensor domain (in this case, the layerwise image domain). An approach for 3D feature extraction from the in situ sensor data, and the implementation of an ensemble classification scheme for discontinuity detection performed in the in situ sensor domain, are also developed and presented.

Section 3 details the implementation of the proposed methodologies on a single PBFAM part, whereby a digital single-lens reflex (DSLR) camera, serving as the in situ sensor, captures multiple images for each build layer (both post-powder-recoat and post-laser-exposure) under various lighting conditions. A linear SVM ensemble classifier that fuses visual information extracted from these images is trained using ground truth labels acquired from the CT scan of the part. A cross-validation scheme for the ensemble classifier is implemented in order to formally evaluate classification performance on the set of labeled layerwise image data. Section 4 discusses the results, including a description of the discontinuity geometries identified in the AM part, and the classification results for the detection of discontinuities using in situ images. The final section summarizes the results and outlines future work.

Section snippets

Methodology

This paper investigates the hypothesis that in situ sensors monitoring a metal PBFAM process, specifically high resolution imaging of build surfaces, can capture features that can be linked to discontinuities or defects in the resultant component. In this work, discontinuities are defined as interruptions in the typical or nominal structure of a material [18]. Discontinuities, such as porosity, can be powder-induced or process-induced, and commonly arise as a result of part solidification, from

Experiment setup and data

A PBFAM build process conducted in an EOS M280 AM system [23] was monitored with an in situ sensor comprising a 36.3-megapixel DSLR camera (Nikon D800E) mounted inside the build chamber. Using combinations of five light sources to generate a total of eight different lighting conditions, the DSLR camera captured multiple images at each build layer. The experimental setup is shown in Fig. 4. The various lighting sources added to the system, designated as flash modules, were covered with several

Coordinate transformation and filter dimensions

The transformation of reference points from the CT scan domain into the in situ sensor domain is displayed in Fig. 10. The calculated RMS errors from Eq. (7) were ∼1.75, ∼1.5, and ∼0.75 DSLR voxels in the x, y, and z directions, corresponding to 87.5, 75, and 37.5 μm, respectively. Based on these RMS errors and the discussion in Section 2.4, a filter size of 7 × 7 × 3 DSLR voxels was chosen for feature extraction in the in situ sensor domain, yielding Fn = 7 × 7 × 3 = 147 linearly independent filters.
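An affine transfer of this kind, with per-axis RMS residuals of the sort used above to pick the filter size, can be sketched as a least-squares fit over matched reference points. All coordinates and noise levels below are synthetic placeholders (the paper estimates the map from reference points embedded in the part, via its Eq. (7)); the numbers have no connection to the reported ∼1.75/1.5/0.75-voxel errors.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic ground-truth affine map, used only to fabricate matched
# reference points; in practice the map is unknown and must be estimated.
A_true = np.array([[ 0.98, 0.02, 0.00],
                   [-0.03, 1.01, 0.01],
                   [ 0.00, 0.00, 0.99]])
t_true = np.array([12.0, -5.0, 3.0])

ct_pts = rng.uniform(0.0, 100.0, size=(10, 3))      # CT-domain xyz points
img_pts = (ct_pts @ A_true.T + t_true
           + rng.normal(scale=0.5, size=ct_pts.shape))  # measurement noise

# Estimate the 12 affine parameters by least squares:
# img ≈ [ct | 1] @ M, where M is 4x3 (linear part stacked on translation).
H = np.hstack([ct_pts, np.ones((len(ct_pts), 1))])
M, *_ = np.linalg.lstsq(H, img_pts, rcond=None)

# Per-axis RMS residuals of the fit; errors of this kind motivate the
# choice of neighborhood/filter size for feature extraction.
rms = np.sqrt(((img_pts - H @ M) ** 2).mean(axis=0))
print("per-axis RMS error:", np.round(rms, 3))
```

With reference points spread across the build volume, the recovered linear part M[:3].T approaches A_true and the residual RMS approaches the measurement noise, which is why the RMS values bound how precisely CT labels can be placed in the image domain.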

Anomaly labels in the CT scan and in situ sensor domains

As noted

Conclusion

Effectiveness of the ensemble classifier scheme, including confirmation of a sufficient sample size, was validated by the similar classification performance across the cross-validation models. The relatively high performance of the classification ensemble demonstrated the potential to discriminate between anomalous and nominal DSLR voxels using an in situ sensor modality comprising layerwise images collected by a DSLR camera. Implementation of an ensemble classification scheme, paired with

Acknowledgements

This work would not have been possible without the contributions of several individuals. We would like to thank Penn State College of Engineering for supporting Christian Gobert during this project. We would like to thank Naval Air Systems Command for their support. We would also like to thank Ms. Gabrielle Gunderman and Mr. Griffin Jones, from ARL Penn State for their efforts designing the experiments and for performing post-process inspection via 3D computed tomography analysis. Finally, we

References (25)

  • G. Tapia et al., A review on process monitoring and control in metal-based additive manufacturing, J. Manuf. Sci. Eng. (2014)
  • W. Frazier, Metal additive manufacturing: a review, J. Mater. Eng. Perform. (2014)