In data mining post-processing, rule selection with objective rule evaluation indices is a useful method for finding valuable knowledge in mined patterns. However, the relationship between index values and experts' evaluation criteria has never been clarified. In this study, we compared the accuracies of classification learning algorithms on datasets with randomized class distributions and on datasets with real human evaluations. To determine this relationship, we used rule evaluation models, which are learned from a dataset consisting of objective rule evaluation index values and an evaluation label for each rule. The results show that the accuracies of the classification learning algorithms differ depending on whether the labels reflect human experts' criteria or a balanced randomized class distribution. Based on these results, we can consider a way to distinguish randomly evaluated rules by using the accuracies of multiple learning algorithms.
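The core comparison described above can be illustrated with a minimal sketch. The data, the index definitions, and the classifier below are all hypothetical stand-ins (synthetic index values, a simple decision stump instead of the actual learning algorithms used in the study): a learner trained on labels that follow a consistent expert-like criterion should reach high accuracy, while the same learner trained on a balanced randomized class distribution should stay near chance level.

```python
import random

random.seed(0)

# Hypothetical setup: each mined rule is described by two synthetic
# objective-index values in [0, 1] and a binary evaluation label.
# With random_labels=True, labels form a balanced randomized class
# distribution; otherwise they follow a fixed "expert-like" criterion.
def make_rules(n=400, random_labels=False):
    rules = []
    for _ in range(n):
        idx1 = random.random()  # synthetic index value (assumption)
        idx2 = random.random()
        if random_labels:
            label = random.choice([0, 1])  # balanced random labels
        else:
            label = 1 if idx1 + 0.2 * idx2 > 0.6 else 0  # expert-like rule
        rules.append(((idx1, idx2), label))
    return rules

def train_stump(train):
    # Decision stump on the first index: choose the threshold that
    # maximizes training accuracy (a stand-in for the learning algorithms).
    best_t, best_acc = 0.0, 0.0
    for t in (i / 100 for i in range(101)):
        acc = sum((x[0] > t) == bool(y) for x, y in train) / len(train)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def accuracy(rules, t):
    return sum((x[0] > t) == bool(y) for x, y in rules) / len(rules)

for name, rnd in [("expert-like labels", False), ("random labels", True)]:
    data = make_rules(random_labels=rnd)
    train, test = data[:300], data[300:]
    t = train_stump(train)
    print(f"{name}: test accuracy = {accuracy(test, t):.2f}")
```

The gap between the two accuracies is the signal the study exploits: labels assigned at random cannot be predicted from the index values, so a model that does no better than chance suggests randomly evaluated rules.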