Most heuristic algorithms for building decision trees are based on information entropy. In this article, we introduce a new heuristic algorithm for decision tree generation based on the importance of an attribute's contribution to the classification, and apply the algorithm to several crisp databases. When selecting the attribute to expand at a given node, there are two choices: a sensitive attribute or an insensitive attribute. Usually the sensitive attribute is selected for branching the node, while the insensitive attribute is ignored. We compare the robustness of the two methods through experiments on several databases, including ID3's robustness for reference. The results indicate that the insensitive method is the most robust one.
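For context, entropy-based heuristics such as ID3 score each candidate attribute by its information gain (the reduction in label entropy after splitting) and branch on the highest-scoring one. The following is a minimal sketch of that baseline selection rule, using a hypothetical toy dataset; the attribute names and data are illustrative assumptions, not from the paper:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr_index):
    """Entropy reduction from partitioning the rows on one attribute."""
    n = len(labels)
    partitions = {}
    for row, label in zip(rows, labels):
        partitions.setdefault(row[attr_index], []).append(label)
    remainder = sum(len(part) / n * entropy(part)
                    for part in partitions.values())
    return entropy(labels) - remainder

# Hypothetical toy data: each row is (outlook, windy), labels are the class.
rows = [("sunny", "no"), ("sunny", "yes"), ("rain", "no"), ("rain", "yes")]
labels = ["no", "no", "yes", "yes"]

# ID3-style choice: expand the node on the attribute with maximal gain.
best = max(range(len(rows[0])), key=lambda i: information_gain(rows, labels, i))
```

Here splitting on attribute 0 (outlook) yields pure partitions, so it maximizes the gain; the importance-based heuristic proposed in the article replaces this entropy criterion with a different attribute score.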