This paper deals with the problem of training an Artificial Neural Network (ANN) when the data set is highly imbalanced. Most learning algorithms, including ANNs, are designed for well-balanced data and do not work properly on imbalanced data. Among the approaches proposed for dealing with this problem, we focus on re-sampling techniques, since they are algorithm-independent. We recently proposed a new under-sampling technique for the two-class problem, called Non-Target Incremental Learning (NTIL), which has shown good performance with Support Vector Machines (SVM), improving both results and training speed. Here, the advantages of using this technique with ANNs are shown, and its performance is compared with that of other popular under-sampling techniques.
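The abstract does not specify the NTIL procedure itself, but the class of methods it belongs to can be illustrated. Below is a minimal sketch of the simplest such baseline, random under-sampling of the majority (non-target) class for a two-class problem; the function name and its random selection strategy are illustrative assumptions, not the NTIL algorithm, which selects non-target samples incrementally rather than at random.

```python
import numpy as np

def random_undersample(X, y, seed=0):
    """Drop majority-class samples at random until both classes have
    the same number of examples.

    NOTE: this is a generic under-sampling baseline for illustration;
    it is not the NTIL technique described in the paper.
    """
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    n_min = counts.min()                      # size of the minority class
    keep = []
    for c, n in zip(classes, counts):
        idx = np.flatnonzero(y == c)
        if n > n_min:                         # majority class: sub-sample it
            idx = rng.choice(idx, size=n_min, replace=False)
        keep.append(idx)
    keep = np.concatenate(keep)
    return X[keep], y[keep]
```

Because re-sampling changes only the training set, the balanced output can be fed unchanged to any learner (ANN, SVM, etc.), which is the algorithm-independence the abstract highlights.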