Application of supervised machine learning for defect detection during metallic powder bed fusion additive manufacturing using high resolution imaging
Introduction
Metallic additive manufacturing (AM) is a process in which near-net-shape parts are built in a layer-by-layer manner from powder alloys directly from digital files. Powder bed fusion AM (PBFAM) is a form of AM in which a laser selectively melts consecutive layers of metal powder placed on a build platform inside a build chamber [1]. After each melt cycle, a new layer of metal powder is spread across the build platform by a recoater blade, rake, or roller. PBFAM parts are made up of hundreds or thousands of layers (typically ∼20–60 μm layer thickness) depending on part dimensions and material, and build times range from hours to days [2]. However, discontinuities in PBFAM parts—e.g. incomplete fusion, porosity, cracks, or inclusions—may arise from contamination or irregularities in powder recoating, laser-material interaction, or part solidification [3], and are thus a common concern, negatively affecting mechanical properties [[4], [5], [6]].
Detection and/or mitigation of such discontinuities may, however, be possible through monitoring of the AM build process. Indeed, research on process monitoring of AM systems has increased in the past decade, driven by AM’s appeal for manufacturing complex, high-value, low-volume production parts [7]. A common approach to process monitoring is measurement of melt pool characteristics. Melt-pool monitoring can provide insight into part quality and process phenomena; however, melt-pool monitoring systems require tight integration with high-speed scanners, and analysis can be complicated by challenges in calibrating the emissivity of the melt pool [7]. Alternatively, imaging of build layers before or after laser exposure offers an inexpensive and system-independent approach to defect detection compared to melt-pool metrology and post-build evaluation techniques [8]. Here, this strategy is termed layerwise imaging.
Layerwise process monitoring has been attempted using visible-light and infrared (IR) imaging. Kleszczynski et al. [9] presented a system-independent layerwise-imaging system, with a 29-megapixel charge-coupled device (CCD) camera, which captured process irregularities on in situ surfaces of PBFAM parts. Jacobsmühlen et al. [10] expanded on Kleszczynski et al. by establishing a connection between the detection of these surface irregularities and mechanical part performance. Mireles et al. [11] applied a low-resolution IR layerwise imaging system to a PBFAM process to observe seeded void discontinuities in parts. Geometries of porous discontinuities were measured with contour tracing in the IR layerwise and CT images; a formal comparison revealed substantial differences in measured geometry between the two domains. Schwerdtfeger et al. [12] demonstrated a correspondence between low-resolution IR layerwise imaging and metallographic imaging of electron-beam-based PBFAM parts.
Despite the promise of layerwise monitoring, post-process inspection is still the de facto method for defect detection in AM parts. Spierings et al. [13] compared and contrasted techniques for post-process porosity analysis of PBF parts, including Archimedes methods, metallographic imaging, and CT scanning. They note that successful void detection in CT images, relative to the Archimedes method, is subject to the selected size threshold for detection of voids, i.e. a larger threshold will prevent detection of smaller voids. In [14], Wits et al. demonstrated that inspection of AM parts using Archimedes, microscopic, and CT methods yields similar porosity values; however, CT scanning additionally enables porosity to be quantified within the part.
Two strategies can be used to improve conclusions drawn from layerwise data: fusion of multiple layerwise image sets, or fusion of layerwise images with a separate data stream. Information fusion between layerwise images from homogeneous sensors, defined as similar sensors under different conditions that form a complementary sensor configuration, provides higher confidence in the interpretation of an observation. Fusing CT data and layerwise images forms a cooperative sensor configuration built from heterogeneous (i.e., dissimilar) sensors, thereby deriving information that cannot be observed by the sensors individually [15]. Weckenmann et al. [15] illustrated that the use of multisensor data fusion, as compared to single-sensor applications, reduces uncertainty in dimensional metrology and provides a more complete description. Aminzadeh and Kurfess [16] proposed the detection of defects in PBF parts using online visual inspection sensors, in varying sensor configurations, paired with classifiers, potentially neural networks or support vector machines (SVMs), but their proposed strategy was not demonstrated.
Supervised machine learning is generally executed in two steps. First, the system is trained, implying that the parameters of the underlying classification scheme are estimated using a training data set with known labels, i.e. the ground truth. For SVM classification, training entails the construction of a decision boundary that best separates the given training data points based on ground truth labels [17]. Second, the performance of the trained classification scheme and associated decision boundary is tested by generating predicted labels for a previously unseen data set, i.e. the test data set. A formal comparison between predicted labels and ground truth labels of the test data set then reveals the out-of-sample classification performance, which typically includes metrics such as false positive rates and false negative rates. This procedure is commonly referred to as cross-validation.
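The two-step train/test procedure described above can be sketched with scikit-learn's `LinearSVC` on synthetic data. This is an illustrative sketch, not the authors' implementation: the feature dimensions, sample counts, and class boundary are placeholder assumptions.

```python
import numpy as np
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Synthetic stand-in for feature vectors with known ground truth labels.
X = rng.normal(size=(400, 8))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# Step 1: estimate the decision boundary from labeled training data.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)
clf = LinearSVC(dual=False).fit(X_train, y_train)

# Step 2: predict labels for the unseen test set and compare against ground
# truth to obtain out-of-sample false positive / false negative rates.
tn, fp, fn, tp = confusion_matrix(y_test, clf.predict(X_test)).ravel()
false_positive_rate = fp / (fp + tn)
false_negative_rate = fn / (fn + tp)
```

Repeating this train/test split over multiple folds of the labeled data gives the cross-validation estimate of classification performance.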
In this work, a methodology to train an SVM classifier to detect discontinuities from in situ sensor data using labeled ground truth data extracted from post-build CT scans is developed and demonstrated. The overall strategy is illustrated in Fig. 1.
The details of each block in the diagram are presented in Section 2. Algorithms are developed for automated acquisition of ground truth labels from CT scan data (a discontinuity vs. a nominal build condition), and for transfer of these labeled data from the CT scan domain into an in situ sensor domain (in this case, the layerwise image domain). An approach for 3D feature extraction from the in situ sensor data and the implementation of an ensemble classification scheme for discontinuity detection performed in the in situ sensor domain are also developed and presented.
Section 3 details the implementation of the proposed methodologies on a single PBFAM part, whereby a digital single lens reflex (DSLR) camera, serving as the in situ sensor, captures multiple images for each build layer (both post-powder recoat and post-laser exposure) under various lighting conditions. A linear SVM ensemble classifier that fuses visual information extracted from these images is trained using ground truth labels acquired from the CT scan of the part. A cross-validation scheme for the ensemble classifier is implemented in order to formally evaluate classification performance on the set of labeled layerwise image data. Section 4 discusses the results, including a description of the discontinuity geometries identified in the AM part, and the classification results for the detection of discontinuities using in situ images. The final section summarizes the results, and outlines future work.
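The ensemble idea, one linear SVM per imaging condition with a fused decision, can be sketched as follows. The eight synthetic "views" stand in for the eight lighting conditions, and the majority-vote fusion rule is an assumption for illustration, not necessarily the fusion scheme used in the paper.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
n_voxels, n_features, n_views = 300, 6, 8  # 8 views ~ 8 lighting conditions

# Ground truth labels (0 = nominal, 1 = anomalous) and one synthetic feature
# matrix per view, each correlated with the labels.
y = rng.integers(0, 2, size=n_voxels)
views = [
    rng.normal(size=(n_voxels, n_features)) + y[:, None] for _ in range(n_views)
]

# Train one linear SVM per view, then fuse predictions by majority vote.
models = [LinearSVC(dual=False).fit(Xv, y) for Xv in views]
votes = np.stack([m.predict(Xv) for m, Xv in zip(models, views)])
fused = (votes.mean(axis=0) >= 0.5).astype(int)
accuracy = float((fused == y).mean())
```

The vote across views is what makes the configuration complementary: a voxel misclassified under one lighting condition can still be labeled correctly by the ensemble.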
Methodology
This paper investigates the hypothesis that in situ sensors monitoring a metal PBFAM process, specifically high resolution imaging of build surfaces, can capture features that can be linked to discontinuities or defects in the resultant component. In this work, discontinuities are defined as interruptions in the typical or nominal structure of a material [18]. Discontinuities, such as porosity, can be powder-induced or process-induced, and commonly arise as a result of part solidification, from
Experiment setup and data
A PBFAM build process conducted in an EOS M280 AM system [23] was monitored with an in situ sensor comprising a 36.3-megapixel DSLR camera (Nikon D800E) mounted inside the build chamber. Using combinations of five light sources to generate a total of eight different lighting conditions, the DSLR camera captured multiple images each build layer. The experimental setup is shown in Fig. 4. The various lighting sources added to the system, designated as flash modules, were covered with several
Coordinate transformation and filter dimensions
The transformation of reference points from the CT scan domain into the in situ sensor domain is displayed in Fig. 10. The calculated RMS errors from Eq. (7) were ∼1.75, ∼1.5, and ∼0.75 DSLR voxels in the x, y, and z directions, corresponding to 87.5, 75, and 37.5 μm. Based on these RMS errors and the discussion in Section 2.4, a filter size of 7 × 7 × 3 DSLR voxels was chosen for feature extraction in the in situ sensor domain, yielding linearly independent filters.
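A minimal sketch of this registration check, using hypothetical reference points and the 50 μm-per-voxel scaling implied by the numbers above (1.75 voxels corresponding to 87.5 μm); the point coordinates and error magnitudes are placeholders:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical reference points: known DSLR-domain locations vs. the same
# points mapped over from the CT domain by a fitted transformation.
true_pts = rng.uniform(0.0, 100.0, size=(20, 3))   # DSLR voxel coordinates
mapped_pts = true_pts + rng.normal(scale=0.5, size=(20, 3))

# Per-axis RMS registration error, in DSLR voxels and in micrometres
# (assuming 50 um per voxel, consistent with 1.75 voxels -> 87.5 um).
rms_voxels = np.sqrt(np.mean((mapped_pts - true_pts) ** 2, axis=0))
rms_um = rms_voxels * 50.0
```

The extraction filter is then sized per axis so that it covers the registration uncertainty, which is how an anisotropic shape such as 7 × 7 × 3 voxels arises from unequal per-axis RMS errors.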
Anomaly labels in the CT scan and in situ sensor domains
As noted
Conclusion
Effectiveness of the ensemble classifier scheme, including the confirmation of a sufficient sample size, was validated by the similar classification performance across the cross-validation models. The relatively high performance of the classification ensemble demonstrated the potential to discriminate between anomalous and nominal DSLR voxels using an in situ sensor modality comprising layerwise imaging collected by a DSLR camera. Implementation of an ensemble classification scheme, paired with
Acknowledgements
This work would not have been possible without the contributions of several individuals. We would like to thank Penn State College of Engineering for supporting Christian Gobert during this project. We would like to thank Naval Air Systems Command for their support. We would also like to thank Ms. Gabrielle Gunderman and Mr. Griffin Jones, from ARL Penn State for their efforts designing the experiments and for performing post-process inspection via 3D computed tomography analysis. Finally, we
References
- et al., Influence of defects on mechanical properties of Ti-6Al-4V components produced by selective laser melting and electron beam melting, Mater. Des. (2015)
- et al., Porosity testing methods for the quality assessment of selective laser melted parts, CIRP Ann. Manuf. Technol. (2016)
- et al., Multisensor data fusion in dimensional metrology, CIRP Ann. Manuf. Technol. (2009)
- et al., Review of in-situ process monitoring and in-situ metrology for metal additive manufacturing, Mater. Des. (2016)
- Data clustering: 50 years beyond K-means, Pattern Recognit. Lett. (2010)
- ASTM International, Standard Terminology for Additive Manufacturing-General Principles-Terminology, ASTM F42 Committee (2015)
- et al., The status, challenges, and future of additive manufacturing in engineering, Comput.-Aided Des. (2016)
- et al., The metallurgy and processing science of metal additive manufacturing, Int. Mater. Rev. (2016)
- et al., Relationship between unit cell type and porosity and the fatigue behavior of selective laser melted meta-biomaterials, J. Mech. Behav. Biomed. Mater. (2015)
- et al., The effect of manufacturing defects on the fatigue behavior of Ti-6Al-4V specimens fabricated using selective laser melting, Adv. Mater. Res. (2014)
- A review on process monitoring and control in metal-based additive manufacturing, J. Manuf. Sci. Eng.
- Metal additive manufacturing: a review, J. Mater. Eng. Perform.