Real-world data sets are often imbalanced with respect to their class distribution. Designing classifiers from such data sets is a relatively new challenge; the main difficulty is the scarcity of positive (minority) class patterns. Two main approaches address this problem: one samples additional minority class patterns (i.e., over-sampling), while the other samples only a part of the majority class patterns (i.e., under-sampling). In our previous research, we proposed a parallel distributed genetics-based machine learning approach for large data sets, in which not only the population but also the training data set is divided into subgroups. Each pair of a sub-population and a training data subset is assigned to an individual CPU core in order to reduce computation time. In this paper, we apply our parallel distributed approach to imbalanced data sets. Each training data subset is constructed by combining one subset of the divided majority class patterns with the entire (non-divided) set of minority class patterns. Through computational experiments, we demonstrate the effectiveness of our parallel distributed approach with the proposed data subdivision schemes on imbalanced data sets.
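The data subdivision scheme described above can be illustrated with a minimal sketch. This is not the authors' implementation; the function name `make_subsets` and the round-robin partitioning of the shuffled majority class are assumptions for illustration. The key point it shows is that only the majority class is divided, while every subset receives the full minority class.

```python
import random

def make_subsets(majority, minority, n_subsets, seed=0):
    """Divide the majority-class patterns into n_subsets disjoint parts
    and pair each part with the entire minority-class set."""
    rng = random.Random(seed)
    shuffled = majority[:]
    rng.shuffle(shuffled)
    # Round-robin assignment keeps the majority parts balanced in size.
    parts = [shuffled[i::n_subsets] for i in range(n_subsets)]
    # Every training data subset keeps all minority patterns.
    return [part + minority for part in parts]

# Hypothetical example: 90 majority patterns, 10 minority patterns,
# divided for 3 CPU cores.
majority = [("maj", i) for i in range(90)]
minority = [("min", i) for i in range(10)]
subsets = make_subsets(majority, minority, 3)
# Each subset holds 30 majority patterns plus all 10 minority patterns,
# so the class ratio within each subset is far less imbalanced (3:1
# instead of 9:1 here).
```

In a parallel distributed setting, each such subset would then be paired with one sub-population and assigned to one CPU core.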