Decision trees represent a simple and powerful method of induction from labeled examples. Univariate decision trees consider the value of a single attribute at each node, leading to splits that are parallel to the axes. In linear multivariate decision trees, all the attributes are used and the partition at each node is based on a linear discriminant (a hyperplane). Nonlinear multivariate decision trees can divide the input space arbitrarily using higher-order parameterizations of the discriminant, though one should be aware of the increase in model complexity and the decrease in the number of examples available as one moves further down the tree. In omnivariate decision trees, each decision node may be univariate, linear, or nonlinear. Such an architecture frees the designer from choosing the appropriate tree type for a given problem. In this paper, we propose to perform model selection at each decision node based on a novel classifiability measure when building omnivariate decision trees. The classifiability measure captures the possible sources of misclassification with relative ease and accurately reflects the complexity of the subproblem at each node. The proposed approach does not require time-consuming statistical tests at each node and therefore does not suffer from as high a computational burden as typical model selection algorithms. Our simulation results over several data sets indicate that our approach achieves classification accuracy at least as good as that of model selection algorithms based on statistical tests, but at much higher speed.
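To make the three kinds of node tests concrete, the following is a minimal sketch (not the paper's implementation; all function names and parameters are illustrative) of how a sample would be routed by a univariate, a linear multivariate, and a nonlinear (here quadratic) decision node:

```python
# Illustrative sketch of the three decision-node types an omnivariate
# tree may choose among. Names and parameters are hypothetical.

def univariate_test(x, j, threshold):
    # Axis-parallel split: compares a single attribute x[j] to a threshold.
    return x[j] <= threshold

def linear_test(x, w, b):
    # Linear multivariate split: hyperplane w . x + b = 0 over all attributes.
    return sum(wi * xi for wi, xi in zip(w, x)) + b <= 0

def quadratic_test(x, W, w, b):
    # Nonlinear split via a quadratic discriminant: x' W x + w . x + b = 0.
    n = len(x)
    quad = sum(W[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
    return quad + sum(wi * xi for wi, xi in zip(w, x)) + b <= 0

# The same point routed by each node type:
x = [1.0, 2.0]
print(univariate_test(x, 0, 1.5))        # x[0] = 1.0 <= 1.5 -> True
print(linear_test(x, [1.0, -1.0], 0.0))  # 1.0 - 2.0 + 0.0 <= 0 -> True
print(quadratic_test(x, [[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0], -6.0))
# 1 + 4 - 6 = -1 <= 0 -> True
```

An omnivariate tree would pick one of these test types per node; in the proposed approach the choice is guided by the classifiability measure of the subproblem reaching that node.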