Classifiability based omnivariate decision trees

2 Author(s)
Li, Y.; Dong, M. (Dept. of Comput. Sci., Wayne State Univ., Detroit, MI, USA)

Decision trees represent a simple and powerful method of induction from labeled examples. Univariate decision trees consider the value of a single attribute at each node, leading to splits that are parallel to the axes. In linear multivariate decision trees, all the attributes are used and the partition at each node is based on a linear discriminant (a hyperplane). Nonlinear multivariate decision trees can divide the input space arbitrarily, based on higher-order parameterizations of the discriminant, though one should be aware of the increase in complexity and the decrease in the number of available examples as one moves further down the tree. In omnivariate decision trees, a decision node may be univariate, linear, or nonlinear. Such an architecture frees the designer from choosing the appropriate tree type for a given problem. In this paper, we propose to perform the model selection at each decision node based on a novel classifiability measure when building omnivariate decision trees. The classifiability measure captures the possible sources of misclassification with relative ease and accurately reflects the complexity of the subproblems at each node. The proposed approach does not require time-consuming statistical tests at each node, and therefore does not suffer from as high a computational burden as typical model selection algorithms. Our simulation results over several data sets indicate that our approach achieves classification accuracy at least as good as that of model selection algorithms based on statistical tests, but at a much faster speed.
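The selection loop the abstract describes can be sketched in code. This is a hypothetical illustration, not the paper's method: the paper's actual classifiability measure is not reproduced here, so the sketch stands in a simple proxy (training accuracy of each candidate split) and compares only a univariate split against a linear discriminant, preferring the simpler model unless the richer one is clearly better.

```python
# Hypothetical sketch of per-node model selection in an omnivariate tree.
# "Classifiability" is approximated here by the training accuracy of each
# candidate split; the paper's real measure is different and not shown.

import numpy as np

def univariate_split_accuracy(X, y):
    """Best axis-parallel split: try every (feature, threshold) pair."""
    best = 0.0
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            pred = (X[:, j] > t).astype(int)
            # Either orientation of the split may match the labels.
            acc = max(np.mean(pred == y), np.mean(pred != y))
            best = max(best, acc)
    return best

def linear_split_accuracy(X, y):
    """Linear discriminant (hyperplane) fitted by least squares."""
    X_aug = np.hstack([X, np.ones((len(X), 1))])  # append bias column
    w, *_ = np.linalg.lstsq(X_aug, 2 * y - 1, rcond=None)
    pred = (X_aug @ w > 0).astype(int)
    return max(np.mean(pred == y), np.mean(pred != y))

def choose_node_model(X, y, margin=0.02):
    """Keep the simpler (univariate) node unless the linear one
    is better by more than `margin` on the proxy measure."""
    uni = univariate_split_accuracy(X, y)
    lin = linear_split_accuracy(X, y)
    return "univariate" if uni >= lin - margin else "linear"
```

On an axis-aligned subproblem the selector keeps the cheap univariate node; on a diagonally separable one, where no single attribute splits the classes well, it escalates to the linear discriminant. A full omnivariate tree would run such a choice (including a nonlinear candidate) recursively at every decision node.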

Published in:

Proceedings of the International Joint Conference on Neural Networks, 2003 (Volume 4)

Date of Conference:

20-24 July 2003