Design of Recurrent Neural Networks for Solving Constrained Least Absolute Deviation Problems

Authors: Xiaolin Hu (Dept. of Comput. Sci. & Technol., Tsinghua Univ., Beijing, China); Changyin Sun; Bo Zhang

Recurrent neural networks for solving constrained least absolute deviation (LAD) problems, or L1-norm optimization problems, have attracted much interest in recent years. However, most existing neural networks can handle only certain special linear constraints efficiently. In this paper, two neural networks are proposed for solving LAD problems with various linear constraints, including equality, two-sided inequality, and bound constraints. When tailored to special cases of LAD problems in which not all types of constraints are present, the two networks yield simpler architectures than most existing ones in the literature. In particular, for solving problems with both equality and one-sided inequality constraints, a third network is devised. All of the networks proposed in this paper are rigorously shown to be capable of solving the corresponding problems. The different networks designed for solving the same types of problems possess the same structural complexity, because these architectures share the same computing blocks and differ only in the connections between some of them. This provides some flexibility for circuit realization. Numerical simulations are carried out to illustrate the theoretical results and to compare the convergence rates of the networks.
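
To make the problem setting concrete, the following is a minimal illustrative sketch in Python, not a reproduction of the networks proposed in the paper. It applies a generic projection-type recurrent network to a bound-constrained LAD problem, minimize ||Ax - b||_1 subject to l <= x <= u. The data A, b and the bounds l, u are synthetic placeholders, and the continuous-time dynamics dx/dt = P(x - A^T sign(Ax - b)) - x, where P is the projection onto the box and sign(.) supplies a subgradient of the L1 objective, are simulated with a forward Euler discretization.

import numpy as np

# Illustrative only (not the paper's models): a projection-type
# recurrent network for the bound-constrained LAD problem
#     minimize ||A x - b||_1   subject to  l <= x <= u.

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 5))                  # synthetic data
x_true = np.array([1.0, -0.5, 0.0, 2.0, -1.5])
b = A @ x_true + 0.05 * rng.standard_normal(30)   # noisy observations
l, u = -2.0 * np.ones(5), 2.0 * np.ones(5)        # box constraints

def project(x):
    """Projection onto the feasible box [l, u]."""
    return np.clip(x, l, u)

x = np.zeros(5)
dt = 0.01  # Euler step size for the continuous-time dynamics
for _ in range(20000):
    g = A.T @ np.sign(A @ x - b)        # subgradient of ||Ax - b||_1
    x = x + dt * (project(x - g) - x)   # projection dynamics

print("recovered x:", np.round(x, 3))
print("LAD objective:", np.abs(A @ x - b).sum())

In a circuit-oriented view of such models, the matrix-vector products, the sign nonlinearity, and the box projection correspond to the kinds of shared computing blocks the abstract refers to; different constraint types change how these blocks are interconnected rather than which blocks are needed.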

Published in: IEEE Transactions on Neural Networks (Volume: 21, Issue: 7)