
Neural Network for Nonsmooth, Nonconvex Constrained Minimization Via Smooth Approximation

Authors: Wei Bian, Department of Mathematics, Harbin Institute of Technology, Harbin, China; Xiaojun Chen

A neural network based on a smoothing approximation is presented for a class of nonsmooth, nonconvex constrained optimization problems in which the objective function is nonsmooth and nonconvex, the equality constraint functions are linear, and the inequality constraint functions are nonsmooth and convex. The approach finds a Clarke stationary point of the optimization problem by following a continuous path defined by the solution of an ordinary differential equation. Global convergence is guaranteed if either the feasible set is bounded or the objective function is level bounded. In particular, the proposed network does not require: 1) the initial point to be feasible; 2) an exact penalty parameter to be chosen in advance; or 3) a differential inclusion to be solved. Numerical experiments and comparisons with some existing algorithms illustrate the theoretical results and show the efficiency of the proposed network.
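To make the general strategy concrete, the sketch below illustrates the idea described in the abstract: the nonsmooth terms are replaced by a smooth approximation with parameter mu, the inequality constraint is handled by a penalty term, and the resulting gradient-flow ODE is integrated while mu is driven toward zero. The toy problem, the smoothing function sqrt(z^2 + mu^2), the fixed penalty weight rho, and the forward-Euler integration are illustrative assumptions, not the construction from the paper (which, notably, does not require an exact penalty parameter).

# A minimal sketch under the assumptions stated above, not the authors' network.
import numpy as np

def smooth_abs_grad(z, mu):
    # derivative of sqrt(z^2 + mu^2), a standard smoothing of |z|
    return z / np.sqrt(z**2 + mu**2)

def grad_smoothed(x, mu, rho=10.0):
    # assumed toy problem: minimize |x0| + 0.25*x1**4 - x1**2
    #                      subject to x0 + x1 - 1 <= 0
    # (nonsmooth, nonconvex objective; linear, hence convex, constraint)
    g = x[0] + x[1] - 1.0
    # derivative of the smoothed plus-function 0.5*(g + sqrt(g^2 + mu^2))
    dplus = 0.5 * (1.0 + g / np.sqrt(g**2 + mu**2))
    return np.array([smooth_abs_grad(x[0], mu) + rho * dplus,
                     x[1]**3 - 2.0 * x[1] + rho * dplus])

x = np.array([2.0, -3.0])        # the starting point need not be feasible
mu, dt = 1.0, 1e-3
for _ in range(20000):           # forward-Euler integration of x'(t) = -grad
    x = x - dt * grad_smoothed(x, mu)
    mu = max(1e-6, 0.999 * mu)   # shrink the smoothing parameter toward zero
print("approximate stationary point:", x)

Following the negative gradient of the smoothed, penalized objective traces a continuous path whose limit points are candidates for (Clarke) stationary points of the original problem; the paper's network achieves this without the fixed penalty weight used in this toy example.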

Published in:

IEEE Transactions on Neural Networks and Learning Systems (Volume: 25, Issue: 3)