
Latent Space Segmentation Model for Visual Surface Defect Inspection



Abstract:

A large number of models claim to enhance visual surface defect inspection accuracy. However, because these models generally operate directly in the pixel space, optimizing advanced segmentation techniques frequently demands substantial computational resources and poses challenges for inference on devices with limited computing power. In addition, many current methodologies rely heavily on extensive surface defect datasets. In response to these challenges, our research presents a novel approach based on an auto-encoder structure that uses the “latent space” to refine defect segmentation. Within the encoder of the autoencoder, we have incorporated contrastive learning, strengthening both feature extraction and segmentation. This architectural choice not only tailors the strategy to rapid-response scenarios and high-accuracy applications, but also addresses the challenges posed by the scarcity of defect samples. To assess our approach and better serve industrial applications that prioritize sample-level accuracy, we introduce sample-level metrics, namely, mostly segmented (MS) and mostly lost (ML). Experiments conducted on the RSDD and Neuseg datasets demonstrate the strategy’s consistent performance under diverse data conditions. Synthesizing the benefits of latent space and contrastive learning, this article delineates a proficient methodology for surface defect segmentation.
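
The abstract does not specify the network configuration, so the following is only a minimal sketch of the general idea it describes: an encoder that maps images into a compact latent space, a decoder that produces the segmentation mask from that latent representation, and a contrastive objective attached to the encoder. The class name LatentSegNet, all layer sizes, and the NT-Xent loss are illustrative assumptions, not details taken from the paper.

```python
# Sketch of an autoencoder-style segmentation model with a contrastive
# projection head on the encoder (assumed architecture, not the paper's).
import torch
import torch.nn as nn
import torch.nn.functional as F


class LatentSegNet(nn.Module):
    def __init__(self, in_ch: int = 1, latent_ch: int = 64):
        super().__init__()
        # Encoder: downsample the image into a compact latent map.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, latent_ch, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Projection head used only by the contrastive objective.
        self.project = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(latent_ch, 128),
        )
        # Decoder: upsample the latent map back to a segmentation mask.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(latent_ch, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), F.normalize(self.project(z), dim=1)


def nt_xent(z1, z2, tau: float = 0.1):
    """Standard NT-Xent contrastive loss over two augmented views."""
    z = torch.cat([z1, z2], dim=0)            # (2B, D), already L2-normalized
    sim = z @ z.t() / tau                     # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))         # exclude self-similarity
    b = z1.size(0)
    targets = torch.cat([torch.arange(b) + b, torch.arange(b)])
    return F.cross_entropy(sim, targets)


# Usage: two augmented views of a batch drive the contrastive term, while
# the decoder output would additionally be supervised for segmentation.
model = LatentSegNet()
view1, view2 = torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)
mask_logits, p1 = model(view1)
_, p2 = model(view2)
loss = nt_xent(p1, p2)  # add a segmentation loss on mask_logits in practice
```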
Article Sequence Number: 5029111
Date of Publication: 28 August 2024
