Abstract:
Anomaly detection aims to identify samples that deviate from the normal data distribution. Recent years have seen substantial progress in anomaly detection with self-supervised representation learning. However, most existing approaches assume the training set contains either only clean normal samples or some labeled abnormal samples. When the unlabeled training set is contaminated, performance degrades because normal and abnormal samples are not clearly discriminated. To address this challenge, in this paper we propose a novel unsupervised representation learning framework that exploits the extra information provided by the anomalies in the unlabeled contaminated data. Specifically, we propose anomaly discrimination learning with pseudo normality score generation and multi-instance contrastive learning to distinguish abnormal samples within the unlabeled set. Meanwhile, we combine this discrimination learning with mean-shifted contrastive learning and distribution-shifting transformation classification in a multi-task representation learning framework to improve the stability and robustness of training. Our proposed method achieves state-of-the-art performance on the CIFAR-10, ImageNet-10, MNIST, and F-MNIST datasets. Extensive ablation studies further demonstrate the effectiveness of each component of the proposed framework.
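The abstract names three components that are combined into a multi-task objective: pseudo-normality-weighted anomaly discrimination, mean-shifted contrastive learning, and distribution-shifting transformation classification. The sketch below is a minimal, hypothetical illustration of how such a combined loss might be wired up; all function names, weightings, and formulations are assumptions for illustration and may differ from the paper's actual method.

```python
# Hypothetical sketch (not the authors' code): a multi-task loss combining the
# three components named in the abstract. Weights and formulations are assumed.
import torch
import torch.nn.functional as F

def mean_shifted_contrastive_loss(z, z_aug, center, tau=0.25):
    """Contrast center-subtracted ("mean-shifted") embeddings of two augmented views."""
    u = F.normalize(F.normalize(z, dim=1) - center, dim=1)
    v = F.normalize(F.normalize(z_aug, dim=1) - center, dim=1)
    logits = u @ v.t() / tau                          # pairwise cosine similarities
    labels = torch.arange(len(z), device=z.device)    # matching views lie on the diagonal
    return F.cross_entropy(logits, labels)

def transformation_classification_loss(trans_logits, trans_labels):
    """Predict which distribution-shifting transformation (e.g., a rotation) was applied."""
    return F.cross_entropy(trans_logits, trans_labels)

def weighted_discrimination_loss(per_sample_scores, pseudo_normality):
    """Down-weight likely anomalies in the unlabeled set using pseudo normality scores in [0, 1]."""
    return (pseudo_normality * per_sample_scores).mean()

def total_loss(z, z_aug, center, trans_logits, trans_labels,
               per_sample_scores, pseudo_normality,
               w_msc=1.0, w_trans=1.0, w_disc=1.0):
    # Multi-task objective: the relative weights here are illustrative assumptions.
    return (w_msc * mean_shifted_contrastive_loss(z, z_aug, center)
            + w_trans * transformation_classification_loss(trans_logits, trans_labels)
            + w_disc * weighted_discrimination_loss(per_sample_scores, pseudo_normality))
```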
Date of Conference: 10-14 July 2023
Date Added to IEEE Xplore: 25 August 2023