
A rough set based approach to knowledge discovery employs a greedy algorithm to reduce the search space when extracting a reduct of a given decision table. Given that an exhaustive search over all possible attribute combinations requires time exponential in the number of attributes, finding a reduct may not be computationally feasible. We turn reduct detection into a classification problem: in the elementary set approximation of an unknown concept, for example, an elementary set is mapped to the positive region of the concept if its degree of membership exceeds a user-defined threshold. This leads us to consider a generalized RBF neural network, whose use of radial basis functions rests on Cover's theorem (1965), which states that "a non-linearly separable pattern classification problem in a high-dimensional space is more likely to be linearly separable than in a low-dimensional space"; this is the reason for making the dimension of the hidden layer in an RBF network high. The generalized RBF neural network has M nodes in the hidden layer, where M is smaller than the number of training patterns N. The linear weights associated with the output layer, the positions of the centers of the radial basis functions, and the norm weighting matrix associated with the hidden layer are all unknown parameters that have to be learned. We use an RBF network with an interior point method (IPM) to evaluate the promise of interior point methods for radial basis functions, training the centers, spreads, and weights as follows. The learning error function E can be approximated by a polynomial P(x) of degree 1 in a neighborhood of x_{i}: E ≈ P(x) ≡ E(x_{i}) + g^{T}(x - x_{i}), where g = ∇_{x}E = Σ_{j=1}^{N} ∇_{x}E_{j}.
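As a minimal sketch of the network architecture described above (not the authors' implementation; the centers, spreads, and weights shown are hypothetical toy values), a generalized RBF forward pass with Gaussian basis functions and M hidden units might look like:

```python
import math

def rbf_forward(x, centers, spreads, weights):
    """Generalized RBF network output for an input vector x.

    centers : list of M center vectors (M smaller than the N training patterns)
    spreads : list of M Gaussian widths
    weights : list of M output-layer linear weights
    """
    # Hidden layer: Gaussian radial basis function activations phi_k(x)
    hidden = [
        math.exp(-sum((xi - ci) ** 2 for xi, ci in zip(x, c)) / (2 * s ** 2))
        for c, s in zip(centers, spreads)
    ]
    # Output layer: linear combination of hidden activations
    return sum(w * h for w, h in zip(weights, hidden))

# Hypothetical toy example: two centers in 2-D, unit spreads and weights
print(rbf_forward([0.0, 0.0], [[0.0, 0.0], [1.0, 1.0]], [1.0, 1.0], [1.0, 1.0]))
```

In the paper's setting, it is the centers, spreads, and output weights of such a network that the interior point method is used to learn.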
So the linear programming problem for a given search direction is: minimize g^{T}(x - x_{i}) subject to -α o ≤ x - x_{i} ≤ α o, where o is the vector of all ones with the same dimension as the state vector x and α > 0 is a constant. In this research we rely on the IPM of Meghabghab and Nasr (1999) and apply it to search for the directions that minimize the error with respect to the weights, the centers of the neurons, and the spreads of the neurons. With the substitution u = x - x_{i} + α o and b = 2α o, the linear program and its dual can be expressed as: minimize g^{T}u subject to u + z = b, u, z ≥ 0; maximize b^{T}y subject to s_{u} = g - y ≥ 0, s_{z} = -y ≥ 0. We apply this new RBF-IPM learning rule to the heart disease training set provided by the RSES of the Warsaw Institute of Poland. The training set consists of N = 8000 samples of m_{0} = 13 variables, represented in 80 exemplars corresponding to rules with 3 to 8 variables per rule; these variables can take different numbers of sampled values. Results show that finding rules of the form X ⇒ Y that are reducts is bounded by O((m_{0}m_{1}/N) log(m_{1}N)), where m_{0} is the number of input nodes (equal to the number of variables), m_{1} is the number of hidden units (smaller than N), and N is the number of training examples. These results improve on the O(m_{0}^{2}N^{2}) bound already known in the literature.
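Because the constraint -α o ≤ x - x_{i} ≤ α o is a componentwise box, the LP for the search direction separates by coordinate and has a simple closed-form minimizer, which an interior point method approaches from inside the box. The following sketch is illustrative only and is not the Meghabghab-Nasr IPM itself:

```python
def lp_search_direction(g, alpha):
    """Minimize g^T d subject to -alpha <= d_k <= alpha (componentwise),
    where d = x - x_i is the search direction.

    Each component of the objective is independent, so the minimizer
    pushes d_k to the box bound opposite the sign of g_k, giving a
    steepest-descent-like step for the linearized error E(x_i) + g^T d.
    """
    return [-alpha if gk > 0 else (alpha if gk < 0 else 0.0) for gk in g]

# Hypothetical gradient of the linearized error
g = [2.0, -0.5, 0.0]
print(lp_search_direction(g, alpha=0.1))  # → [-0.1, 0.1, 0.0]
```

The same direction-finding step is applied in turn to the gradients with respect to the weights, centers, and spreads.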

- Page(s):
- 131 - 136
- E-ISBN:
- 1-4244-0363-4
- Print ISBN:
- 1-4244-0362-6
- INSPEC Accession Number:
- 9484635

- Conference Location:
- Montreal, Que.
- DOI:
- 10.1109/NAFIPS.2006.365873
- Publisher:
- IEEE