
New Radial Basis Function Interior Point Method Learning Rule for Rough Set Reduct Detection

1 Author(s)
Meghabghab, G. ; Dept. of Comput. Sci. Technol., Roane State, Oak Ridge, TN

Rough-set-based approaches to knowledge discovery employ greedy algorithms to reduce the search space when extracting a reduct of a given decision table. Given that an exhaustive search over all possible attribute combinations requires time exponential in the number of attributes, finding a reduct may not be computationally feasible. We turn reduct detection into a classification problem: in the elementary-set approximation of an unknown concept, for example, an elementary set is mapped to the positive region of that concept if its degree of membership exceeds a user-defined threshold. This idea leads us to consider a generalized RBF neural network, whose radial basis functions rest on Cover's theorem of 1965, which states that a non-linearly separable pattern-classification problem cast in a high-dimensional space is more likely to be linearly separable than in a low-dimensional space; this is the reason the dimension of the hidden layer in an RBF network is made high. The generalized RBF neural network has M nodes in its hidden layer, where M is smaller than the number of training patterns N. The linear weights at the output layer, the positions of the centers of the radial basis functions, and the norm-weighting matrix associated with the hidden layer are all unknown parameters that must be learned. We combine the RBF network with an interior point method (IPM) to evaluate the promise of interior point methods for radial basis functions, training the centers, spreads, and weights as follows. The learning error function E can be approximated by a polynomial P(x) of degree 1 in a neighborhood of x_i:

E ≈ P(x) ≡ E(x_i) + g^T (x − x_i),  where  g = ∇_x E = Σ_{j=1}^{N} ∇_x E_j.
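To make the structure concrete, the following is a minimal sketch of a generalized RBF forward pass with M hidden centers (M < N) whose output is thresholded to decide positive-region membership, mirroring the elementary-set criterion above. All names (`rbf_forward`, `centers`, `spreads`, `weights`, the Gaussian basis, and the threshold value 0.5) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def rbf_forward(X, centers, spreads, weights):
    """Gaussian RBF network: phi_j(x) = exp(-||x - c_j||^2 / (2 s_j^2)),
    followed by a linear output layer, output = phi @ weights."""
    # Pairwise squared distances between the N inputs and the M centers
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)  # shape (N, M)
    phi = np.exp(-d2 / (2.0 * spreads**2))                         # hidden-layer activations
    return phi @ weights                                           # linear output layer

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 13))         # 8 patterns, 13 attributes (as in the heart data)
centers = rng.normal(size=(4, 13))   # M = 4 hidden units, smaller than N = 8
spreads = np.ones(4)
weights = rng.normal(size=4)

scores = rbf_forward(X, centers, spreads, weights)
threshold = 0.5                      # user-defined membership threshold (illustrative)
positive_region = scores > threshold # patterns mapped to the positive region
print(positive_region)
```

In a full implementation the centers, spreads, and output weights would all be adjusted by the learning rule rather than drawn at random; the sketch only fixes the network's shape.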
So the linear programming problem for a given search direction has as its solution: minimize g^T (x − x_i) subject to −α·o ≤ x − x_i ≤ α·o, where o is the vector of all ones with the same dimension as the state vector x and α > 0 is a constant. In this research we rely on the IPM of Meghabghab and Nasr (1999) and apply it to search for the directions that minimize the error corresponding to the weights, the error corresponding to the centers of the neurons, and the error corresponding to the spreads of the neurons. The linear interior point method and its dual can be expressed as follows:

minimize c^T u subject to u + z = b, u, z ≥ 0;
maximize b^T y subject to s_u = c − y ≥ 0, s_z = −y ≥ 0,

where c = g, u = x − x_i + α·o, and b = 2α·o. We apply this new RBF-IPM learning rule to the heart-disease training data provided by the RSES of the Warsaw Institute of Poland. The training set consists of N = 8000 samples of m_0 = 13 variables, represented in 80 exemplars corresponding to rules of 3 to 8 variables per rule. These variables can take different numbers of sampling values. Results show that finding rules of the form X ⇒ Y that are reducts is computationally bounded by O((m_0 m_1 / N) log(m_1 N)), where m_0 is the number of input nodes (equal to the number of variables), m_1 is the number of hidden units (smaller than N), and N is the number of training examples. These results improve on the O(m_0^2 N^2) bound already known in the literature.
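The box-constrained direction-finding subproblem above can be sketched directly. For a pure box constraint the minimizer has the closed form d = −α·sign(g), and the substitution u = x − x_i + α·o, b = 2α·o puts it in the standard equality form min c^T u s.t. u + z = b, u, z ≥ 0 that the interior point method solves. The function name and the gradient values below are illustrative assumptions.

```python
import numpy as np

def lp_search_direction(g, alpha):
    """Minimizer d = x - x_i of g^T d over the box [-alpha, alpha]^n:
    push each coordinate to the bound opposite the sign of its gradient."""
    return -alpha * np.sign(g)

g = np.array([0.7, -1.2, 0.0, 3.4])   # illustrative gradient of the error E
alpha = 0.1
d = lp_search_direction(g, alpha)

# Verify the standard-form substitution: u = d + alpha*o lies in [0, b], b = 2*alpha*o
o = np.ones_like(g)
u = d + alpha * o
b = 2 * alpha * o
assert np.all(u >= 0) and np.all(u <= b)
print(d)
```

Note that a coordinate with zero gradient (here the third one) is left at zero, since any value in the box minimizes the objective along that coordinate.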

Published in:

NAFIPS 2006, Annual Meeting of the North American Fuzzy Information Processing Society

Date of Conference:

3-6 June 2006