
Pull & Push: Leveraging Differential Knowledge Distillation for Efficient Unsupervised Anomaly Detection and Localization


Abstract:

Recently, much attention has been paid to segmenting subtle unknown defect regions via knowledge distillation in an unsupervised setting. Most previous studies concentrated on guiding the student network to learn the same representations as the teacher on normal regions, neglecting the two networks' different behaviors on abnormal ones. This leads to a high probability of failing to detect subtle defects. To address this issue, we propose to push the teacher's and student's representations on abnormal areas as far apart as possible while pulling their representations on normal areas as close as possible. Based on this idea, we design an efficient teacher-student model for anomaly detection and localization, which maximizes pixel-wise discrepancies between the two networks on anomalous regions approximated by data augmentation, and simultaneously minimizes their discrepancies on pixel-wise normal regions. This explicit differential knowledge distillation enlarges the margin between normal and abnormal representations, making them easier to discriminate. Moreover, an appropriately small student network is not only efficient but, more importantly, helps inhibit the generalization of anomalous patterns while learning normal ones, facilitating a precise decision boundary. Experimental results on the MVTec AD, Fashion-MNIST, and CIFAR-10 datasets demonstrate that our proposed method outperforms current state-of-the-art (SOTA) approaches. In particular, on the MVTec AD dataset with high-resolution images, we achieve 98.1% AUROC and 93.6% AUPRO in anomaly localization, outperforming knowledge-distillation-based SOTA methods by 1.1% AUROC and 1.5% AUPRO with a lightweight model.
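The pull-push objective described in the abstract can be sketched as a per-pixel loss over teacher and student feature maps: minimize feature discrepancy on normal pixels (pull) and drive it above a margin on synthetically anomalous pixels (push). The sketch below is illustrative only and assumes cosine distance as the discrepancy measure and a hinge-style push term; the paper's exact loss, feature layers, and margin are not specified here, and the function name `pull_push_loss` is hypothetical.

```python
import numpy as np

def pull_push_loss(teacher_feat, student_feat, anomaly_mask, margin=1.0):
    """Differential distillation loss (illustrative sketch).

    teacher_feat, student_feat: (C, H, W) feature maps from the two networks.
    anomaly_mask: (H, W) binary mask; 1 marks pixels made anomalous by
        data augmentation, 0 marks normal pixels.
    margin: assumed hinge margin for the push term.
    """
    # Per-pixel cosine distance between normalized feature vectors.
    t = teacher_feat / (np.linalg.norm(teacher_feat, axis=0, keepdims=True) + 1e-8)
    s = student_feat / (np.linalg.norm(student_feat, axis=0, keepdims=True) + 1e-8)
    dist = 1.0 - (t * s).sum(axis=0)  # (H, W), ~0 when features agree

    normal = anomaly_mask == 0
    abnormal = anomaly_mask == 1

    # Pull: shrink discrepancy on normal pixels.
    pull = dist[normal].mean() if normal.any() else 0.0
    # Push: penalize anomalous pixels only while their discrepancy
    # is still below the margin (hinge).
    push = np.maximum(margin - dist[abnormal], 0.0).mean() if abnormal.any() else 0.0
    return pull + push
```

At inference time, the same per-pixel discrepancy map `dist` would serve directly as the anomaly localization score: normal pixels yield low discrepancy, anomalous ones high.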
Page(s): 2176 - 2189
Date of Publication: 07 November 2022
