Robust Defect Detection for Binder Jet 3D Printing with Semi-Siamese Neural Networks

Y. Niu
University of Connecticut,
United States

Keywords: defect localization, change detection, domain adaptation, robust modeling, binder jet 3D printing


Automatic defect detection for 3D printing processes, which shares many characteristics with change detection problems, is a vital step in quality control of 3D printed products. However, the current state of practice faces several critical challenges. First, existing methods for computer vision-based process monitoring typically work well only under specific camera viewpoints and lighting conditions, requiring expensive pre-processing, alignment, and camera setups. Second, many defect detection techniques are specific to pre-defined defect patterns and/or print schematics. Third, currently available deep neural networks require comprehensive labelled datasets that are costly to obtain and may be limited in their coverage of defect types. In this work, we approach the defect detection problem with a novel Semi-Siamese deep neural network model based on the few-shot learning framework. Rather than learning feature maps for specific classes of defects, as common deep neural networks do, our model learns a similarity function between two images. As a result, our model predicts defects by directly comparing a reference schematic of the desired print and a camera image of the achieved print, even though the two come from different image domains. This novel framework, combined with data augmentation, greatly reduces the training dataset requirements. We started with a limited dataset of only 57 pairs of experimental images of printed material and their corresponding reference schematics. After data augmentation, the final dataset consists of 16,400 training, 560 validation, and 560 test images, where 41 underlying schematic images are used in the training data and 8 schematic images each are used for validation and testing. Data augmentation gives the data a varied set of perturbations, simulating those that may exist in experimental camera images.
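The core idea of learning a cross-domain similarity function, with only part of the network shared between the schematic and camera branches, can be sketched as a toy forward pass. This is a minimal illustration, not the authors' implementation: all layer sizes, weight names, and the single dense layer per branch are assumptions made for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, w_branch, w_shared):
    """Encode an image patch: a domain-specific layer followed by a
    shared layer (the 'semi' in Semi-Siamese: only the tail weights
    are tied between the two branches)."""
    h = np.maximum(x @ w_branch, 0.0)     # domain-specific projection + ReLU
    return np.maximum(h @ w_shared, 0.0)  # shared projection + ReLU

# Illustrative dimensions: flattened patch -> hidden -> embedding.
d_in, d_hid, d_emb = 64, 32, 16
w_schematic = 0.1 * rng.normal(size=(d_in, d_hid))  # branch for schematics
w_camera    = 0.1 * rng.normal(size=(d_in, d_hid))  # branch for camera images
w_shared    = 0.1 * rng.normal(size=(d_hid, d_emb)) # tied tail weights

def defect_score(schematic_patch, camera_patch):
    """Similarity-based score: a small embedding distance suggests the
    region printed as designed, a large one flags a likely defect."""
    a = encode(schematic_patch, w_schematic, w_shared)
    b = encode(camera_patch, w_camera, w_shared)
    return float(np.linalg.norm(a - b))

patch = rng.normal(size=d_in)
score = defect_score(patch, rng.normal(size=d_in))
```

Because the model scores similarity rather than recognising fixed defect classes, it can flag deviations it was never explicitly trained on; training (omitted here) would fit the weights so that matching schematic/camera pairs score low and defective pairs score high.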
The types of perturbations we use in the data augmentation include zoom, rotation, shear, position (width and height) shift, and changes in colour contrast. Our model is designed to enable comparison of heterogeneous images from different domains while remaining robust to perturbations in the imaging setup. On our experimental dataset, we achieve an F1 score higher than 0.9. Compared with other existing methods, our novel Semi-Siamese network has a relatively simple structure. Crucially, we show that this simple architecture, which is easy to pre-train for enhanced performance on new datasets, outperforms more complex state-of-the-art approaches based on generative adversarial networks and transformers. The simple structure also gives our model faster training and loading times: even on a standard MacBook Pro, defect localization predictions can be made in less than half a second, demonstrating the efficacy of our method for in-situ defect detection in 3D printing.
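A subset of the augmentation pipeline can be sketched as below. This is an illustrative numpy-only version covering position shift, contrast change, and zoom; rotation and shear are omitted to keep the sketch self-contained, and all parameter ranges are assumptions rather than the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(42)

def shift(img, dx, dy):
    """Position (width/height) shift by rolling pixels; edges wrap for simplicity."""
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

def adjust_contrast(img, factor):
    """Scale pixel deviations from the mean to raise or lower contrast."""
    mean = img.mean()
    return np.clip(mean + factor * (img - mean), 0.0, 1.0)

def zoom(img, k):
    """Integer-factor zoom-in: crop the centre, then repeat pixels back to size."""
    h, w = img.shape
    ch, cw = h // k, w // k
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    crop = img[y0:y0 + ch, x0:x0 + cw]
    return np.repeat(np.repeat(crop, k, axis=0), k, axis=1)

def augment(img):
    """Apply one randomly chosen perturbation, simulating camera-image variation."""
    op = rng.integers(3)
    if op == 0:
        return shift(img, int(rng.integers(-4, 5)), int(rng.integers(-4, 5)))
    if op == 1:
        return adjust_contrast(img, float(rng.uniform(0.5, 1.5)))
    return zoom(img, 2)

schematic = rng.uniform(size=(32, 32))      # stand-in for a print schematic
augmented = [augment(schematic) for _ in range(10)]
```

Applying such perturbations to each schematic/camera pair is how a seed set of 57 image pairs can be expanded into thousands of varied training examples.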