Table 9. Previous studies on object detection models

Reference | Research object | Model(s) used | Average precision (%) | References
Yin et al., 2025 | Defect localization in PBF-LB | Faster R-CNN and YOLOv5 | Faster R-CNN: 46.25; YOLOv5: 81.5 | This work
Paraskevoudis et al., 2020 | Detection of stringing defects in FFF 3D printing | SSD with VGG16 | 44 | 46
Scime et al., 2020 | Powder bed anomaly detection | Dynamic Segmentation CNN | Pixel-wise accuracy: >90 | 49
Cannizzaro et al., 2021 | PBF defect detection | Computer vision + U-Net | ≥75 | 19
Wen et al., 2021 | Detection of cracks and pores in PBF-LB | YOLOv4 (detection) and Detectron2 (segmentation) | ~50 | 34
Wang et al., 2024 | Small defect detection in metallic AM based on CT images | DC-RCNN | 73.3 | 47
Dong et al., 2025 | Internal defect detection in AM 6061 aluminum alloy using laser ultrasound | YOLOv5 | 93.1 | 48

Abbreviations: AM: Additive manufacturing; CNN: Convolutional neural network; CT: Computed tomography; DC-RCNN: Depth-connected region-based convolutional neural network; FFF: Fused filament fabrication; PBF-LB: Laser-based powder bed fusion; SSD: Single shot detector.
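As a concrete reference for the detection models compared in Table 9, the following is a minimal sketch of how a YOLOv5 detector could be loaded and applied to a powder bed layer image through the public ultralytics/yolov5 repository; the weights file, image path, and confidence threshold are illustrative assumptions and do not reflect this study's exact configuration.

```python
# Illustrative sketch only: running a YOLOv5 detector on a powder bed layer image
# via the public ultralytics/yolov5 repository (PyTorch Hub interface).
# "defect_best.pt" and "layer_0123.png" are hypothetical placeholders, not
# artifacts of this study.
import torch

# Load custom fine-tuned weights (e.g., produced beforehand with the repository's
# train.py on annotated layer images) through the documented torch.hub entry point.
model = torch.hub.load("ultralytics/yolov5", "custom", path="defect_best.pt")

# Confidence threshold for reported detections (assumed value).
model.conf = 0.25

# Run inference on a single layer image; the model returns bounding-box predictions.
results = model("layer_0123.png")

# Inspect predictions as a DataFrame: xmin, ymin, xmax, ymax, confidence, class.
print(results.pandas().xyxy[0])
```

Training such a detector typically involves pointing the repository's train.py script at a dataset YAML describing the annotated images and initializing from pretrained weights such as yolov5s.pt.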
depending on factors such as dataset size, defect types, and image quality. Although direct comparisons are not conclusive due to these differences, the cited results provide a general context for interpreting our findings.

Previous studies employed traditional ML algorithms, such as Support Vector Machine (SVM) and random forest,16,29,31 or simpler deep learning models, such as basic CNNs,7,25 for image classification tasks. In contrast, this study adopts more sophisticated architectures, ResNet50 and EfficientNetV2B0, and achieves near-perfect accuracy by integrating transfer learning to enhance model performance, outperforming many earlier studies. In addition, it was observed that EfficientNetV2B0 not only maintained a very high accuracy rate but also converged faster and demonstrated better stability.50

Unlike most existing works that focus solely on image classification, this study systematically evaluated both classification and object detection models using a unified, real-world AM dataset and a consistent training pipeline for the first time.44,45 By integrating recent architectures such as EfficientNetV2B0 and YOLOv5, which offer both accuracy and computational efficiency, the proposed dual-task framework addresses the practical demands of AM process monitoring and provides a valuable reference for future model selection and deployment in industrial defect detection.

Compared to image classification tasks, the application of object detection models for defect localization in AM remains highly underexplored, as illustrated in Table 9. Existing studies demonstrate that models trained with conventional annotations typically achieve AP values in the range of 40–50%.34,46 A recent study by Wang et al.47 proposed a depth-connected region-based CNN (DC-RCNN) model for small defect detection on computed tomography images, but its performance was limited by a small dataset of only C-shaped components (AP up to 73.3%). Dong et al.48 further validated the high accuracy of YOLOv5 (93.1%) on laser ultrasonic data, though the dataset was purely simulated in COMSOL and comparisons beyond the YOLO family were limited. In contrast, this study used a larger and more diverse real-world powder bed dataset, demonstrating that optimized YOLOv5 achieves higher AP across multiple defect types. This highlights the broader potential of object detection models for industrial AM applications.

Notably, some studies employed pixel-level annotations and semantic segmentation models, achieving higher reported performance (AP ≥75% and pixel-wise accuracy >90%, respectively).19,49 These findings suggest that pixel-level annotations can be considered to enhance label quality and improve object detection models' accuracy. However, pixel-level annotation in AM requires domain-specific expertise and is prohibitively time-consuming for thousands of images. Consequently, most existing segmentation studies are conducted on relatively small datasets and result in highly task-specific models with limited generalizability. Moreover, segmentation models are computationally intensive, requiring greater computing resources and longer training times, which limits their real-time applicability in edge or online inspection scenarios. Nonetheless, segmentation remains important for deeper analysis of defect formation. Future work will explore advanced segmentation techniques to support root cause investigation and closed-loop quality control in AM.

5. Conclusion

This study comparatively evaluated two image classification models and two object detection models for defect identification and localization on a PBF-LB image dataset. The key findings are summarized below:
• ResNet50 and EfficientNetV2B0 achieved over 99% accuracy in classifying recoating defects with minimal

