
Calibrating Probability with Undersampling for Unbalanced Classification



Abstract:

Undersampling is a popular technique for reducing the skew in class distributions of unbalanced datasets. However, it is well known that undersampling one class modifies the priors of the training set and consequently biases the posterior probabilities of a classifier. In this paper, we study analytically and experimentally how undersampling affects the posterior probability of a machine learning model. We formalize the problem of undersampling and explore the relationship between the conditional probability in the presence and absence of undersampling. Although the bias due to undersampling does not affect the ranking order returned by the posterior probability, it significantly impacts classification accuracy and probability calibration. We use Bayes Minimum Risk theory to find the correct classification threshold and show how to adjust it after undersampling. Experiments on several real-world unbalanced datasets validate our results.
Date of Conference: 07-10 December 2015
Date Added to IEEE Xplore: 11 January 2016
Print ISBN: 978-1-4799-7560-0
Conference Location: Cape Town, South Africa

