I. Introduction
Machine learning algorithms are now widely used across many industries [1]–[3], and their security has accordingly received broad attention from researchers. It has been shown that machine learning is potentially vulnerable to malicious attacks in both the training and inference phases, which may render a model unusable or skew it according to the attacker's intent: the main threat in the training phase is the data poisoning attack, and the main threat in the inference phase is the evasion attack. Backdoor attacks occur mainly in the training phase, with a few occurring in the inference phase, so some backdoor attacks can be regarded as a special kind of data poisoning attack. Data poisoning attacks can be broadly classified into two categories, availability attacks and targeted attacks, as shown in Fig. 1. An availability attack aims to corrupt the model's classification performance as much as possible, whereas a targeted attack aims to affect specific data points and manipulate specific outputs. In addition, two special attacks, the clean-label attack [4] and the label-flipping attack [5], have been studied according to the degree of control the attacker has over the training data labels.
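To make the label-flipping case concrete, the following minimal Python sketch poisons a training set by flipping the labels of a random fraction of samples and reports the resulting loss of test accuracy. This is an illustrative assumption, not the method of any cited work; the synthetic dataset, the flip_labels helper, and the chosen flip rates are all hypothetical.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Illustrative setup: a synthetic binary classification task.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

def flip_labels(y, rate, rng):
    """Flip the labels of a random fraction `rate` of training samples
    (a simple label-flipping poisoning attack on binary labels)."""
    y_poisoned = y.copy()
    n_flip = int(rate * len(y))
    idx = rng.choice(len(y), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # binary labels: 0 <-> 1
    return y_poisoned

rng = np.random.default_rng(0)
for rate in (0.0, 0.2, 0.4):  # hypothetical flip rates
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_train, flip_labels(y_train, rate, rng))
    print(f"flip rate {rate:.1f}: test accuracy {clf.score(X_test, y_test):.3f}")
```

Because the attacker here only alters labels, not feature vectors, this sketch also illustrates why label-flipping is treated as a distinct category defined by the attacker's control over the training data labels.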