Projection Vector Machine: One-stage learning algorithm from high-dimension small-sample data

5 Author(s)
Wanyu Deng ; Dept. of Comput. Sci. & Technol., Xi'an Jiaotong Univ., Xi'an, China ; Qinghua Zheng ; Shiguo Lian ; Lin Chen

The presence of few samples and a large number of input features increases the complexity of a classifier and degrades its stability. Dimension reduction is therefore usually carried out before a supervised learning algorithm such as a neural network is applied. This two-stage framework is somewhat redundant, because dimension reduction and network training overlap in their computation. This paper proposes a novel one-stage learning algorithm for high-dimension small-sample data, called the Projection Vector Machine (PVM), which combines dimension reduction with network training and removes the redundancy. Through a dimension-reduction operation such as singular value decomposition (SVD), we not only reduce the dimension but simultaneously obtain the size of a single-hidden-layer feedforward neural network (SLFN) and its input weight values. With the network size fixed, the remaining problem becomes a linear system, so the output weights can be determined by the simple least-squares method. Unlike a traditional backpropagation feedforward neural network (BP), the parameters in PVM do not need iterative tuning, so its training speed is much faster than BP's. Unlike the extreme learning machine (ELM) proposed by Huang [G.-B. Huang, Q.-Y. Zhu, C.-K. Siew, Extreme learning machine: theory and applications, Neurocomputing 70 (2006) 489-501], which assigns input weights randomly, PVM ranks its input weights by singular value and selects the optimal weights in that order. We give a proof that PVM is a universal approximator for high-dimension small-sample data. Experimental results show that the proposed one-stage algorithm PVM is faster than two-stage learning approaches such as SVD+BP and SVD+ELM.
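
The abstract describes a complete pipeline: an SVD of the data matrix supplies both the hidden-layer size and the input weights of the SLFN, after which the output weights follow from a single least-squares solve. Below is a minimal NumPy sketch of that idea, for illustration only; the sigmoid activation, the ridge term, the rule of keeping the top-k right singular vectors, and the names pvm_train/pvm_predict are assumptions of this sketch, not the authors' reference implementation.

    import numpy as np

    def pvm_train(X, y, k, ridge=1e-8):
        # One-stage PVM sketch: the SVD yields the input weights (and a natural
        # hidden-layer size), then least squares yields the output weights.
        # X: (n_samples, n_features), y: (n_samples, n_outputs), k <= min(n_samples, n_features).
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        W = Vt[:k].T                          # input weights: top-k right singular vectors, ranked by singular value
        H = 1.0 / (1.0 + np.exp(-(X @ W)))    # hidden-layer activations (sigmoid assumed here)
        # Output weights from the linear system H @ beta = y via regularized least squares.
        beta = np.linalg.solve(H.T @ H + ridge * np.eye(k), H.T @ y)
        return W, beta

    def pvm_predict(X, W, beta):
        H = 1.0 / (1.0 + np.exp(-(X @ W)))
        return H @ beta

    # Toy usage on high-dimension small-sample data: 20 samples, 500 features.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((20, 500))
    y = rng.standard_normal((20, 1))
    W, beta = pvm_train(X, y, k=10)
    print(pvm_predict(X, W, beta).shape)      # (20, 1)

Because the hidden layer is fixed by the SVD, no iterative tuning is needed; the only numerical work beyond the SVD is one k-by-k linear solve, which is what makes the one-stage approach fast on small-sample data.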

Published in:

The 2010 International Joint Conference on Neural Networks (IJCNN)

Date of Conference:

18-23 July 2010