
Constrained Dimensionality Reduction Using a Mixed-Norm Penalty Function with Neural Networks


Authors (2): Huiwen Zeng (Synopsys, Inc., Hillsboro, OR, USA); H. J. Trussell

Reducing the dimensionality of a classification problem produces a more computationally efficient system. Since the dimensionality of a classification problem is equivalent to the number of neurons in the first hidden layer of a network, this work shows how to eliminate neurons in that layer and thereby simplify the problem. In cases where the dimensionality cannot be reduced without some degradation in classification performance, we formulate and solve a constrained optimization problem that allows a trade-off between dimensionality and performance. We introduce a novel penalty function and combine it with bilevel optimization to solve the constrained problem. On both synthetic and applied problems, the method outperforms other known penalty functions such as weight decay, weight elimination, and Hoyer's function. An example of dimensionality reduction for hyperspectral image classification demonstrates the practicality of the new method. Finally, we show how the method can be extended to multilayer and multiclass neural network problems.
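The abstract does not reproduce the paper's exact penalty, but the general idea of a mixed-norm penalty for eliminating whole neurons can be sketched as a group-style norm over the first-layer weight matrix: take an l_p norm within each neuron's group of incoming weights, then an l_q norm across neurons, so that entire weight groups (and hence neurons, i.e., dimensions) are driven to zero. The function below is an illustrative sketch under those assumptions, not the authors' formulation:

```python
import numpy as np

def mixed_norm_penalty(W, p=2, q=1):
    """Illustrative mixed-norm penalty on a first-layer weight matrix.

    W : array of shape (n_inputs, n_hidden); column j holds the incoming
        weights of hidden neuron j.
    p : norm applied within each neuron's weight group.
    q : norm applied across the per-neuron group norms.

    With p=2, q=1 this reduces to a group-lasso-style penalty that
    favors zeroing out entire columns, i.e., removing hidden neurons.
    """
    group_norms = np.linalg.norm(W, ord=p, axis=0)  # one l_p norm per neuron
    return np.sum(np.abs(group_norms) ** q) ** (1.0 / q)
```

For example, a weight matrix whose second column is all zeros contributes nothing for that neuron: `mixed_norm_penalty(np.array([[3.0, 0.0], [4.0, 0.0]]))` evaluates to 5.0, the l2 norm of the surviving column. In a training loop this term would be added, scaled by a regularization coefficient, to the classification loss.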

Published in: IEEE Transactions on Knowledge and Data Engineering (Volume 22, Issue 3)