Stationary Fuzzy Fokker–Planck Learning for Derivative-Free Optimization

Authors: Mohit Kumar (Center for Life Science Automation, Germany); Norbert Stoll; Kerstin Thurow; Regina Stoll

Stationary fuzzy Fokker–Planck learning (SFFPL) is a recently introduced computational method that applies fuzzy modeling to solve optimization problems. This study develops a concept for applying SFFPL-based computations to nonlinear constrained optimization. We develop SFFPL-based optimization algorithms that require derivatives of neither the objective function nor the constraints. A sequential penalty approach is used to handle the inequality constraints. Under some standard assumptions, the carefully designed SFFPL-based algorithms are proved to converge asymptotically to stationary points. The convergence proofs follow a simple mathematical approach and invoke the mean-value theorem. The algorithms were evaluated on test problems with up to 50 variables. A performance comparison of the proposed algorithms with some standard optimization algorithms further justifies our approach. Owing to its novelty, the SFFPL-based optimization approach could be extended in several research directions.
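The abstract does not give the SFFPL algorithm itself, but the sequential penalty idea it relies on is standard: inequality constraints are folded into the objective via a quadratic penalty whose weight grows over a sequence of unconstrained subproblems, each solved by a derivative-free method. The sketch below illustrates only that generic scheme, using a simple compass (coordinate) search as a stand-in for the SFFPL-based inner solver; all function names and parameter choices here are illustrative assumptions, not the paper's method.

```python
def compass_search(f, x0, step=0.5, tol=1e-6, max_iter=10000):
    """Derivative-free minimizer: probe +/- step along each axis,
    accept improvements, halve the step when no move helps."""
    x, fx = list(x0), f(x0)
    while step > tol and max_iter > 0:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                y = x[:]
                y[i] += d
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step *= 0.5
        max_iter -= 1
    return x

def sequential_penalty(f, constraints, x0, mu=1.0, rho=10.0, outer=8):
    """Sequential penalty approach for constraints g_i(x) <= 0:
    repeatedly minimize f(x) + mu * sum(max(0, g_i(x))^2),
    warm-starting each round and increasing mu by the factor rho."""
    x = list(x0)
    for _ in range(outer):
        def penalized(y, mu=mu):
            return f(y) + mu * sum(max(0.0, g(y)) ** 2 for g in constraints)
        x = compass_search(penalized, x)
        mu *= rho
    return x
```

For example, minimizing (x0 - 2)^2 + (x1 - 1)^2 subject to x0 + x1 <= 2 with `sequential_penalty` drives the iterates toward the constrained optimum (1.5, 0.5); the quadratic penalty only enforces the constraint in the limit of large mu, which is why the weight is increased geometrically across the outer iterations.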

Published in: IEEE Transactions on Fuzzy Systems (Volume: 21, Issue: 2)