IEEE Transactions on Neural Networks and Learning Systems

Issue 2 • Feb. 2013

  • Table of contents

    Publication Year: 2013 , Page(s): C1
  • IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS publication information

    Publication Year: 2013 , Page(s): C2
  • Stability for Neural Networks With Time-Varying Delays via Some New Approaches

    Publication Year: 2013 , Page(s): 181 - 193
    Cited by:  Papers (3)

    This paper considers delay-dependent stability criteria for neural networks with time-varying delays. First, by constructing a newly augmented Lyapunov-Krasovskii functional, a less conservative stability criterion is established in terms of linear matrix inequalities. Second, by introducing novel activation function conditions that have not been considered before, further improved stability criteria are derived. Finally, three numerical examples from the literature are given to show the improvements over existing criteria and the effectiveness of the proposed approach.

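
Delay-dependent criteria of this kind are built from a Lyapunov-Krasovskii functional; a generic textbook form, shown only for orientation and not the paper's exact augmented construction, is

```latex
V(x_t) = x^{\top}(t) P x(t)
       + \int_{t-h(t)}^{t} x^{\top}(s) Q x(s)\,ds
       + \int_{-h}^{0} \int_{t+\theta}^{t} \dot{x}^{\top}(s) R \dot{x}(s)\,ds\,d\theta ,
\qquad P,\, Q,\, R \succ 0 .
```

Requiring \(\dot{V} < 0\) along system trajectories then reduces to the feasibility of a set of linear matrix inequalities.
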
  • Sequential Projection-Based Metacognitive Learning in a Radial Basis Function Network for Classification Problems

    Publication Year: 2013 , Page(s): 194 - 206
    Cited by:  Papers (10)

    In this paper, we present a sequential projection-based metacognitive learning algorithm in a radial basis function network (PBL-McRBFN) for classification problems. The algorithm is inspired by human metacognitive learning principles and has two components: a cognitive component and a metacognitive component. The cognitive component is a single-hidden-layer radial basis function network with an evolving architecture. The metacognitive component controls the learning process in the cognitive component by choosing the best learning strategy for the current sample and adapts the learning strategies by implementing self-regulation. In addition, sample overlapping conditions and past knowledge of the samples, in the form of pseudosamples, are used for proper initialization of new hidden neurons to minimize misclassification. The parameter update strategy uses projection-based direct minimization of the hinge loss error. The interaction of the cognitive and metacognitive components efficiently addresses the what-to-learn, when-to-learn, and how-to-learn principles of human learning. The performance of PBL-McRBFN is evaluated on a set of benchmark classification problems from the University of California, Irvine machine learning repository. The statistical performance evaluation on these problems shows the superior performance of the PBL-McRBFN classifier over results reported in the literature. We also evaluate the proposed algorithm on a practical Alzheimer's disease detection problem. The results on the Open Access Series of Imaging Studies and Alzheimer's Disease Neuroimaging Initiative datasets, which are obtained from different demographic regions, clearly show that PBL-McRBFN can handle a problem with a change in distribution.

  • Developing a Local Least-Squares Support Vector Machines-Based Neuro-Fuzzy Model for Nonlinear and Chaotic Time Series Prediction

    Publication Year: 2013 , Page(s): 207 - 218
    Cited by:  Papers (9)

    Local modeling approaches, owing to their ability to model different operating regimes of nonlinear systems and processes with independent local models, are appealing for modeling, identification, and prediction applications. In this paper, we propose a local neuro-fuzzy (LNF) approach based on least-squares support vector machines (LSSVMs). The proposed LNF approach employs LSSVMs, which are powerful in modeling and predicting time series, as local models, and uses a hierarchical binary tree (HBT) learning algorithm for fast and efficient estimation of its parameters. The HBT algorithm heuristically partitions the input space into smaller subdomains by axis-orthogonal splits. In each partition, the validity functions automatically form a partition of unity, so normalization side effects, e.g., reactivation, are prevented. Integration of LSSVMs into the LNF network as local models, along with the HBT learning algorithm, yields a high-performance approach for modeling and prediction of complex nonlinear time series. The proposed approach is applied to modeling and prediction of various nonlinear and chaotic real-world and hand-designed systems and time series. Analysis of the prediction results and comparisons with recent and earlier studies demonstrate the promising performance of the proposed LNF approach with the HBT learning algorithm for modeling and prediction of nonlinear and chaotic systems and time series.

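
For orientation, the LSSVM local models mentioned above reduce training to a single linear system rather than a quadratic program. Below is a minimal sketch of standard LS-SVM regression in its generic textbook form, not the paper's LNF/HBT implementation; the kernel width and regularization values are illustrative:

```python
import numpy as np

def rbf_kernel(A, B, sigma=0.2):
    # Gaussian (RBF) kernel matrix between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=100.0, sigma=0.2):
    # LS-SVM regression: solve the bordered linear system
    # [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y].
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[1:], sol[0]          # alpha, b

def lssvm_predict(X_train, alpha, b, X_new, sigma=0.2):
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b
```

Here `gamma` trades data fit against regularization and `sigma` is the kernel width; both would be tuned per local model in practice.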
  • Radial Basis Function Network Training Using a Nonsymmetric Partition of the Input Space and Particle Swarm Optimization

    Publication Year: 2013 , Page(s): 219 - 230
    Cited by:  Papers (6)

    This paper presents a novel algorithm for training radial basis function (RBF) networks, in order to produce models with increased accuracy and parsimony. The proposed methodology is based on a nonsymmetric variant of the fuzzy means (FM) algorithm, which has the ability to determine the number and locations of the hidden-node RBF centers, whereas the synaptic weights are calculated using linear regression. Taking advantage of the short computational times required by the FM algorithm, we wrap a particle swarm optimization (PSO) based engine around it, designed to optimize the fuzzy partition. The result is an integrated framework for fully determining all the parameters of an RBF network. The proposed approach is evaluated through its application on 12 real-world and synthetic benchmark datasets and is also compared with other neural network training techniques. The results show that the RBF network models produced by the PSO-based nonsymmetric FM algorithm outperform the models produced by the other techniques, exhibiting higher prediction accuracies in shorter computational times, accompanied by simpler network structures.

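
As background to the split described above: once the RBF centers are fixed (by the fuzzy means algorithm in the paper; by a simple uniform grid in this hedged stand-in), the synaptic weights follow from ordinary linear least squares:

```python
import numpy as np

def rbf_design_matrix(X, centers, width):
    # One Gaussian basis function per center, evaluated at every sample.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

def train_rbf(X, y, centers, width):
    # Given fixed centers, the output weights are a linear regression.
    Phi = rbf_design_matrix(X, centers, width)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w
```

The interesting part of the paper is precisely how the centers and partition are chosen; this sketch only shows why the weight step is cheap.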
  • Compositional Generative Mapping for Tree-Structured Data—Part II: Topographic Projection Model

    Publication Year: 2013 , Page(s): 231 - 247

    We introduce GTM-SD (Generative Topographic Mapping for Structured Data), which is the first compositional generative model for topographic mapping of tree-structured data. GTM-SD exploits a scalable bottom-up hidden-tree Markov model that was introduced in Part I of this paper to achieve a recursive topographic mapping of hierarchical information. The proposed model allows efficient exploitation of contextual information from shared substructures by a recursive upward propagation on the tree structure which distributes substructure information across the topographic map. Compared to its noncompositional generative counterpart, GTM-SD is shown to allow the topographic mapping of the full sample tree, which includes a projection onto the lattice of all the distinct subtrees rooted in each of its nodes. Experimental results show that the continuous projection space generated by the smooth topographic mapping of GTM-SD yields a finer grained discrimination of the sample structures with respect to the state-of-the-art recursive neural network approach.

  • Efficient Multitemplate Learning for Structured Prediction

    Publication Year: 2013 , Page(s): 248 - 261

    Conditional random fields (CRFs) and structural support vector machines (structural SVMs) are two state-of-the-art methods for structured prediction that capture the interdependencies among output variables. The success of these methods is attributed to the fact that their discriminative models are able to account for overlapping features on all input observations. These features are usually generated by applying a given set of templates to labeled data, but improper templates may lead to degraded performance. To alleviate this issue, in this paper we propose a novel multiple-template learning paradigm that learns the structured prediction and the importance of each template simultaneously, so that hundreds of arbitrary templates can be added to the learning model without careful manual selection. This paradigm can be formulated as a special multiple kernel learning problem with an exponential number of constraints. We then introduce an efficient cutting-plane algorithm to solve this problem in the primal and establish its convergence. We also evaluate the proposed learning paradigm on two widely studied structured prediction tasks, i.e., sequence labeling and dependency parsing. Extensive experimental results show that the proposed method outperforms CRFs and structural SVMs by exploiting the importance of each template. Complexity analysis and empirical results also show that the proposed method is more efficient than online multiple kernel learning on very sparse and high-dimensional data. We further extend this paradigm to structured prediction using generalized p-block norm regularization with p > 1, and experiments show competitive performance when p ∈ [1,2).

  • Formulating Robust Linear Regression Estimation as a One-Class LDA Criterion: Discriminative Hat Matrix

    Publication Year: 2013 , Page(s): 262 - 273
    Cited by:  Papers (1)

    Linear discriminant analysis, such as Fisher's criterion, is a statistical learning tool traditionally devoted to separating a training dataset into two or more classes by way of linear decision boundaries. In this paper, we show that this tool can formalize the robust linear regression problem as a robust estimator would. More precisely, we develop a one-class Fisher's criterion whose maximization provides both the regression parameters and the separation of the data into two classes: typical data, and atypical data or outliers. This new criterion is built on the statistical properties of the subspace decomposition of the hat matrix. From this angle, we improve the discriminative properties of the hat matrix, which is traditionally used as an outlier diagnostic measure in linear regression. Naturally, we call this new approach the discriminative hat matrix. The proposed algorithm is fully unsupervised and needs only the initialization of one parameter. Synthetic and real datasets are used to study the performance of the proposed approach in terms of both regression and classification. We also illustrate its potential application to image recognition and fundamental matrix estimation in computer vision.

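
The hat matrix that this approach builds on is standard in linear regression; a minimal sketch of computing it and the classical leverage diagnostic (the paper's discriminative refinement is not reproduced here):

```python
import numpy as np

def hat_matrix(X):
    # H = X (X^T X)^{-1} X^T projects the response vector y onto the
    # column space of the design matrix X (so y_hat = H y).
    return X @ np.linalg.solve(X.T @ X, X.T)

def leverages(X):
    # diag(H): the classical per-sample influence/outlier diagnostic;
    # each leverage lies in [0, 1] and they sum to the number of parameters.
    return np.diag(hat_matrix(X))
```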
  • Fuzzy-Neural-Network Inherited Sliding-Mode Control for Robot Manipulator Including Actuator Dynamics

    Publication Year: 2013 , Page(s): 274 - 287
    Cited by:  Papers (3)

    This paper presents the design and analysis of an intelligent control system that inherits the robust properties of sliding-mode control (SMC) for an n-link robot manipulator, including actuator dynamics, in order to achieve high-precision position tracking with firm robustness. First, the coupled higher-order dynamic model of an n-link robot manipulator is briefly introduced. Then, a conventional SMC scheme is developed for the joint position tracking of robot manipulators. Moreover, a fuzzy-neural-network inherited SMC (FNNISMC) scheme is proposed to relax the requirement of detailed system information and to deal with chattering control efforts in the SMC system. In the FNNISMC strategy, the FNN framework is designed to mimic the SMC law, and adaptive tuning algorithms for the network parameters are derived using the projection algorithm and the Lyapunov stability theorem to ensure network convergence as well as stable control performance. Numerical simulations and experimental results for a two-link robot manipulator actuated by DC servo motors are provided to justify the claims of the proposed FNNISMC system, and the superiority of the proposed FNNISMC scheme is also evaluated by quantitative comparison with previous intelligent control schemes.

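
As a generic illustration of the chattering issue the FNNISMC addresses: a conventional SMC law switches on the sign of a sliding surface, and a smooth boundary layer is one common mitigation. The sketch below uses illustrative gains (`lam`, `k`, `phi`) on a double integrator, not the paper's manipulator design:

```python
import math

def smc_step(e, e_dot, lam=2.0, k=10.0, phi=0.05):
    # Sliding surface s = e_dot + lam * e; on the surface the error decays
    # like exp(-lam * t). The smooth tanh(s/phi) replaces sign(s) inside a
    # boundary layer of width phi to reduce chattering.
    s = e_dot + lam * e
    return -k * math.tanh(s / phi)
```

Closing the loop on x'' = u with e = x drives the state to the surface and then to zero; shrinking `phi` recovers the discontinuous law (and its chattering).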
  • Generalization Performance of Fisher Linear Discriminant Based on Markov Sampling

    Publication Year: 2013 , Page(s): 288 - 300
    Cited by:  Papers (1)

    Fisher linear discriminant (FLD) is a well-known method for dimensionality reduction and classification that projects high-dimensional data onto a low-dimensional space where the data achieves maximum class separability. Previous works describing the generalization ability of FLD have usually been based on the assumption of independent and identically distributed (i.i.d.) samples. In this paper, we go beyond this classical framework by studying the generalization ability of FLD based on Markov sampling. We first establish bounds on the generalization performance of FLD based on uniformly ergodic Markov chain (u.e.M.c.) samples, and prove that FLD based on u.e.M.c. samples is consistent. Following ideas from Markov chain Monte Carlo methods, we also introduce a Markov sampling algorithm for FLD to generate u.e.M.c. samples from a given dataset of finite size. Through simulation studies and numerical studies on benchmark repositories using FLD, we find that FLD based on u.e.M.c. samples generated by Markov sampling can provide smaller misclassification rates than i.i.d. samples.

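
For reference, the FLD projection itself (independent of the sampling scheme studied in the paper) is the within-class-scatter-whitened mean difference; a minimal two-class sketch, with a small ridge term added for numerical safety:

```python
import numpy as np

def fisher_direction(X0, X1):
    # w is proportional to S_w^{-1} (mu1 - mu0): the direction maximizing
    # between-class separation relative to within-class scatter.
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = (np.cov(X0.T, bias=True) * len(X0)
          + np.cov(X1.T, bias=True) * len(X1))      # pooled scatter matrix
    w = np.linalg.solve(Sw + 1e-8 * np.eye(len(mu0)), mu1 - mu0)
    return w / np.linalg.norm(w)
```

Projecting samples onto `w` gives the one-dimensional space in which a threshold separates the two classes.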
  • Selective Positive–Negative Feedback Produces the Winner-Take-All Competition in Recurrent Neural Networks

    Publication Year: 2013 , Page(s): 301 - 309
    Cited by:  Papers (2)

    The winner-take-all (WTA) competition is widely observed in both inanimate and biological media and in society. Many mathematical models have been proposed to describe the phenomena discovered in different fields. These models are capable of demonstrating the WTA competition; however, they are often very complicated owing to compromises with experimental realities in their particular fields, and it is often difficult to explain the underlying mechanism of the competition from the perspective of feedback based on such sophisticated models. In this paper, we take steps in that direction and present a simple model, which produces the WTA competition by taking advantage of selective positive-negative feedback through the interaction of neurons via a p-norm. Compared to existing models, this model offers an explicit explanation of the competition mechanism. The ultimate convergence behavior of the model is proven analytically, the convergence rate is discussed, and simulations are conducted in both static and dynamic competition scenarios. Both theoretical and numerical results validate the effectiveness of the dynamic equation in describing the nonlinear phenomena of WTA competition.

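
The paper's p-norm model is not reproduced here, but a classical dynamic that exhibits the same winner-take-all behavior is the discrete replicator equation, in which the unit with the largest (positive) input absorbs all of the activity:

```python
import numpy as np

def wta_replicator(u, x0, steps=200):
    # Discrete replicator dynamics: x_i <- x_i * u_i / sum_j x_j * u_j.
    # After k steps, x_i is proportional to x_i(0) * u_i**k, so the unit
    # with the largest input u_i ends up with essentially all the mass.
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x * u
        x = x / x.sum()
    return x
```

This shows the phenomenon only; the paper's contribution is producing it through an explicitly interpretable positive-negative feedback structure.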
  • Identification and Prediction of Dynamic Systems Using an Interactively Recurrent Self-Evolving Fuzzy Neural Network

    Publication Year: 2013 , Page(s): 310 - 321
    Cited by:  Papers (7)

    This paper presents a novel recurrent fuzzy neural network, called an interactively recurrent self-evolving fuzzy neural network (IRSFNN), for prediction and identification of dynamic systems. The recurrent structure in an IRSFNN is formed by external loops and internal feedback, with the firing strength of each rule fed to the other rules and to itself. The consequent part of the IRSFNN is of a Takagi-Sugeno-Kang (TSK) or functional-link-based type. The proposed IRSFNN employs a functional link neural network (FLNN) in the consequent part of the fuzzy rules to improve the mapping ability. Unlike in a TSK-type fuzzy neural network, the FLNN consequent is a nonlinear function of the input variables. IRSFNN learning starts with an empty rule base, and all rules are generated and learned online through simultaneous structure and parameter learning: an online clustering algorithm generates the fuzzy rules, the consequent parameters are updated by a variable-dimensional Kalman filter algorithm, and the premise and recurrent parameters are learned through gradient descent. We test the IRSFNN on the prediction and identification of dynamic plants and compare it to other well-known recurrent FNNs. The proposed model obtains enhanced performance results.

  • New Discrete-Time Recurrent Neural Network Proposal for Quadratic Optimization With General Linear Constraints

    Publication Year: 2013 , Page(s): 322 - 328
    Cited by:  Papers (1)

    In this brief, the quadratic problem with general linear constraints is reformulated using the Wolfe dual theory, and a very simple discrete-time recurrent neural network is proved to be able to solve it. Conditions that guarantee global convergence of this network to the constrained minimum are developed. The computational complexity of the method is analyzed, and experimental work is presented that shows its high efficiency.

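
The brief's Wolfe-dual network is not spelled out in the abstract; as a hedged stand-in, the simplest discrete-time recurrent scheme in this family is a projected gradient iteration for a convex quadratic program with nonnegativity constraints:

```python
import numpy as np

def qp_recurrent(Q, c, alpha=0.1, steps=2000):
    # Discrete-time recurrent iteration for min 1/2 x^T Q x + c^T x, x >= 0:
    # each step is a gradient step projected back onto the feasible orthant.
    # Converges for convex Q when the step size alpha is small enough.
    x = np.zeros(len(c))
    for _ in range(steps):
        x = np.maximum(0.0, x - alpha * (Q @ x + c))
    return x
```

Each "neuron" updates from a linear combination of the states plus a clipping nonlinearity, which is what makes such schemes natural to cast as recurrent networks.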
  • Incorporating Mean Template Into Finite Mixture Model for Image Segmentation

    Publication Year: 2013 , Page(s): 328 - 335

    The well-known finite mixture model (FMM) has been regarded as a useful tool for image segmentation. However, the pixels in an FMM are considered independent of each other, and the spatial relationship between neighboring pixels is not taken into account. These limitations make the FMM sensitive to noise. In this brief, we propose a simple and effective method that makes the traditional FMM more robust to noise with the help of a mean template. From its mathematical formulation, the FMM can be viewed as a combination of prior and conditional probabilities. We calculate these probabilities with two mean templates: a weighted arithmetic mean template and a weighted geometric mean template. Thus, in our model, the prior (or conditional) probability of an image pixel is influenced by the probabilities of the pixels in its immediate neighborhood, incorporating local spatial and intensity information to suppress the noise. Finally, our algorithm is general enough to be extended to other FMM-based models to achieve superior performance. Experimental results demonstrate the improved robustness and effectiveness of our approach.

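
A hedged sketch of the mean-template idea: replace each pixel's class probability by a neighborhood average, which suppresses isolated noisy labels. Here a uniform 3x3 arithmetic mean stands in for the paper's weighted templates (the weighted geometric variant is not shown):

```python
import numpy as np

def mean_template(P, size=3):
    # Arithmetic-mean template: a box filter over each pixel's neighborhood,
    # applied to a per-pixel probability map P. Edge pixels reuse border
    # values via 'edge' padding.
    pad = size // 2
    Pp = np.pad(P, pad, mode='edge')
    out = np.zeros_like(P, dtype=float)
    for di in range(size):
        for dj in range(size):
            out += Pp[di:di + P.shape[0], dj:dj + P.shape[1]]
    return out / (size * size)
```

In an EM loop one would apply this to the prior (or posterior) map of each class before the next update, so that a lone noisy pixel cannot keep its own label against a unanimous neighborhood.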
  • Hyperbolic Hopfield Neural Networks

    Publication Year: 2013 , Page(s): 335 - 341
    Cited by:  Papers (3)

    In recent years, several neural networks using Clifford algebra, also called geometric algebra, have been studied. Complex-valued Hopfield neural networks (CHNNs) are the most popular neural networks using Clifford algebra. The aim of this brief is to construct hyperbolic Hopfield neural networks (HHNNs) as an analog of CHNNs. Hyperbolic algebra is a Clifford algebra based on Lorentzian geometry. In this brief, a hyperbolic neuron is defined in a manner analogous to a phasor neuron, which is a typical complex-valued neuron model. HHNNs share common concepts with CHNNs, such as the angle and the energy. However, HHNNs and CHNNs differ in several respects: the states of hyperbolic neurons do not form a circle, so the start and end states are not identical, and in the quantized version, unlike complex-valued neurons, hyperbolic neurons have an infinite number of states.

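
The hyperbolic (split-complex) algebra underlying HHNNs is standard mathematics: the imaginary unit satisfies j^2 = +1 instead of -1, giving the indefinite Lorentzian modulus a^2 - b^2 in place of a^2 + b^2. A minimal sketch of the arithmetic (the neuron and energy definitions of the brief are not reproduced):

```python
class Hyperbolic:
    # Split-complex (hyperbolic) number a + b*j with j*j = +1.
    def __init__(self, a, b):
        self.a, self.b = a, b

    def __mul__(self, other):
        # (a + bj)(c + dj) = (ac + bd) + (ad + bc) j, since j^2 = +1.
        return Hyperbolic(self.a * other.a + self.b * other.b,
                          self.a * other.b + self.b * other.a)

    def lorentz_norm(self):
        # Indefinite Lorentzian modulus a^2 - b^2; it is multiplicative
        # but, unlike the complex modulus, can be zero or negative.
        return self.a ** 2 - self.b ** 2
```

The indefinite modulus is why the state set of a hyperbolic neuron is unbounded (a hyperbola rather than a circle), which is the structural difference from CHNNs noted in the abstract.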
  • 2014 IEEE World Congress on Computational Intelligence

    Publication Year: 2013 , Page(s): 342
  • Open Access

    Publication Year: 2013 , Page(s): 343
  • IEEE Xplore Digital Library

    Publication Year: 2013 , Page(s): 344
  • IEEE Computational Intelligence Society Information

    Publication Year: 2013 , Page(s): C3
  • IEEE Transactions on Neural Networks information for authors

    Publication Year: 2013 , Page(s): C4

Aims & Scope

IEEE Transactions on Neural Networks and Learning Systems publishes technical articles that deal with the theory, design, and applications of neural networks and related learning systems.


Meet Our Editors

Editor-in-Chief
Derong Liu
Institute of Automation
Chinese Academy of Sciences