Social discrimination against certain sensitive groups within society (e.g., women, blacks, minorities) is prohibited by law in many countries. To prevent discrimination arising from the use of discriminatory data, recent data mining research has focused on methods for making classifiers learned over such data discrimination-aware. Most of these methods, however, have been tested on standard classification datasets tweaked for discrimination analysis rather than on actual discriminatory data. In this paper, we study discrimination-aware classification applied to a real-world dataset of Statistics Netherlands, the national statistical agency of the Netherlands. Specifically, we consider the use of classifiers for predicting whether an individual is a crime suspect, in order to support the decision making of law enforcement and security agencies. Our results show that discrimination does exist in real-world datasets and that naively applying classifiers learned over such data can exacerbate the discrimination problem. We demonstrate that discrimination-aware classification methods can mitigate these discriminatory effects and that they lead to rational and legally acceptable decisions.
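A standard way to quantify the discrimination this line of work targets is the gap in positive-outcome rates between a favored and a deprived group (often called the discrimination score or statistical parity gap). The sketch below computes it on toy predictions; the group labels and data are illustrative and not drawn from the Statistics Netherlands dataset discussed in the paper.

```python
# Sketch: discrimination score as the difference in positive-prediction
# rates between a favored and a deprived group. Data are illustrative.

def discrimination_score(labels, groups, favored="m", deprived="f"):
    """P(label=1 | group=favored) - P(label=1 | group=deprived)."""
    def positive_rate(g):
        members = [y for y, s in zip(labels, groups) if s == g]
        return sum(members) / len(members)
    return positive_rate(favored) - positive_rate(deprived)

# Toy classifier output: 1 = flagged as crime suspect
labels = [1, 1, 0, 1, 0, 0, 0, 1]
groups = ["m", "m", "m", "m", "f", "f", "f", "f"]
print(discrimination_score(labels, groups))  # 0.75 - 0.25 = 0.5
```

A score near zero indicates the classifier flags both groups at similar rates; discrimination-aware methods aim to drive this gap down without sacrificing accuracy more than necessary.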