# IEEE Transactions on Neural Networks and Learning Systems


Publication Year: 2014, Page(s): C1
PDF (121 KB)
• ### IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS publication information

Publication Year: 2014, Page(s): C2
PDF (140 KB)
• ### A New Learning Algorithm for a Fully Connected Neuro-Fuzzy Inference System

Publication Year: 2014, Page(s):1741 - 1757
Cited by:  Papers (11)
PDF (3801 KB) | HTML

A traditional neuro-fuzzy system is transformed into an equivalent fully connected three-layer neural network (NN), namely, the fully connected neuro-fuzzy inference system (F-CONFIS). The F-CONFIS differs from traditional NNs in its dependent and repeated weights between the input and hidden layers, and can be considered a variant of the multilayer NN. Therefore, an efficient learning al...

• ### Synchronization of Stochastic Dynamical Networks Under Impulsive Control With Time Delays

Publication Year: 2014, Page(s):1758 - 1768
Cited by:  Papers (43)
PDF (4286 KB) | HTML

In this paper, the stochastic synchronization problem is studied for a class of delayed dynamical networks under delayed impulsive control. Different from the existing results on the synchronization of dynamical networks under impulsive control, impulsive input delays are considered in our model. By assuming that the impulsive intervals belong to a certain interval and using the mathematical induc...

• ### Stochastic Learning via Optimizing the Variational Inequalities

Publication Year: 2014, Page(s):1769 - 1778
Cited by:  Papers (3)
PDF (1365 KB) | HTML

A wide variety of learning problems can be posed in the framework of convex optimization. Many efficient algorithms have been developed based on solving the induced optimization problems. However, there exists a gap between the theoretically unbeatable convergence rate and the practically efficient learning speed. In this paper, we use the variational inequality (VI) convergence to describe the le...

• ### Sparse Alignment for Robust Tensor Learning

Publication Year: 2014, Page(s):1779 - 1792
Cited by:  Papers (37)
PDF (3339 KB) | HTML

Multilinear/tensor extensions of manifold-learning-based algorithms have been widely used in computer vision and pattern recognition. This paper first provides a systematic analysis of the multilinear extensions of the most popular methods by using alignment techniques, thereby obtaining a general tensor alignment framework. From this framework, it is easy to show that the manifold-learning-based...

• ### An Incremental Design of Radial Basis Function Networks

Publication Year: 2014, Page(s):1793 - 1803
Cited by:  Papers (31)
PDF (4746 KB) | HTML

This paper proposes an offline algorithm for incrementally constructing and training radial basis function (RBF) networks. In each iteration of the error correction (ErrCor) algorithm, one RBF unit is added to fit and then eliminate the highest peak (or lowest valley) in the error surface. This process is repeated until a desired error level is reached. Experimental results on real-world data sets...
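The incremental loop described in the abstract can be sketched as follows. This is only an illustration of the stated idea, not the paper's exact ErrCor update: the Gaussian unit shape, the fixed width, and the full least-squares refit after each insertion are assumptions.

```python
import numpy as np

def gaussian_design(X, centers, width):
    """Phi[i, j] = exp(-||x_i - c_j||^2 / width^2)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / width ** 2)

def errcor_rbf(X, y, max_units=10, width=0.3, tol=1e-2):
    """Incrementally add one RBF unit per iteration at the largest residual."""
    residual = y.copy()
    centers = np.empty((0, X.shape[1]))
    weights = np.zeros(0)
    for _ in range(max_units):
        i = np.argmax(np.abs(residual))       # highest peak or lowest valley
        centers = np.vstack([centers, X[i]])  # center the new unit there
        Phi = gaussian_design(X, centers, width)
        # refit all output weights by least squares
        weights, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        residual = y - Phi @ weights
        if np.sqrt(np.mean(residual ** 2)) < tol:
            break                             # desired error level reached
    return centers, weights, residual
```

On a one-dimensional toy target (e.g. a sine wave), each added unit typically removes the dominant residual feature, so the RMS error shrinks as the network grows.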

• ### Pinning Distributed Synchronization of Stochastic Dynamical Networks: A Mixed Optimization Approach

Publication Year: 2014, Page(s):1804 - 1815
Cited by:  Papers (47)
PDF (2455 KB) | HTML

This paper is concerned with the problem of pinning synchronization of nonlinear dynamical networks with multiple stochastic disturbances. Two kinds of pinning schemes are considered: 1) pinned nodes are fixed along the time evolution and 2) pinned nodes are switched from time to time according to a set of Bernoulli stochastic variables. Using Lyapunov function methods and stochastic analysis tech...

• ### Deep Networks are Effective Encoders of Periodicity

Publication Year: 2014, Page(s):1816 - 1827
Cited by:  Papers (14)
PDF (930 KB) | HTML

We present a comparative theoretical analysis of representation in artificial neural networks with two extreme architectures, a shallow wide network and a deep narrow network, devised to maximally decouple their representative power due to layer width and network depth. We show that, given a specific activation function, models with comparable VC-dimension are required to guarantee zero error mode...

• ### Parsimonious Extreme Learning Machine Using Recursive Orthogonal Least Squares

Publication Year: 2014, Page(s):1828 - 1841
Cited by:  Papers (53)
PDF (3937 KB) | HTML

Novel constructive and destructive parsimonious extreme learning machines (CP- and DP-ELM) are proposed in this paper. By virtue of the proposed ELMs, a parsimonious structure and excellent generalization of multi-input multi-output single-hidden-layer feedforward networks (SLFNs) are obtained. The proposed ELMs are developed by an innovative decomposition of the recursive orthogonal least squares proced...

• ### LI-MLC: A Label Inference Methodology for Addressing High Dimensionality in the Label Space for Multilabel Classification

Publication Year: 2014, Page(s):1842 - 1854
Cited by:  Papers (7)
PDF (2610 KB) | HTML

Multilabel classification (MLC) has generated considerable research interest in recent years as a technique that can be applied to many real-world scenarios. To process multilabel data sets (MLDs) with binary or multiclass classifiers, transformation methods have been proposed, as well as adapted algorithms able to work with this type of data set. However, until now, few studies have addr...

• ### A Fast Algorithm for Nonnegative Matrix Factorization and Its Convergence

Publication Year: 2014, Page(s):1855 - 1863
Cited by:  Papers (7)
PDF (983 KB) | HTML

Nonnegative matrix factorization (NMF) has recently become a very popular unsupervised learning method because of the representational properties of its factors and the simple multiplicative update algorithms for solving it. However, for the common NMF approach of minimizing the Euclidean distance between approximate and true values, the convergence of multiplicative update algorithms has not been we...
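For the Euclidean-distance objective the abstract refers to, the standard multiplicative updates (Lee–Seung) look like the sketch below. This is background on the baseline the paper improves upon, not the paper's own fast algorithm.

```python
import numpy as np

def nmf_multiplicative(V, rank, n_iter=200, eps=1e-9, seed=0):
    """Factor V ~= W @ H (all entries nonnegative) by minimizing ||V - WH||_F^2."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    # strictly positive random initialization keeps the updates well defined
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(n_iter):
        # elementwise multiplicative updates preserve nonnegativity
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

Because both updates multiply by nonnegative ratios, no projection step is needed; the Frobenius reconstruction error is nonincreasing under these updates, which is exactly the convergence behavior the paper analyzes more rigorously.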

• ### Memristor Crossbar-Based Neuromorphic Computing System: A Case Study

Publication Year: 2014, Page(s):1864 - 1878
Cited by:  Papers (64)
PDF (2566 KB) | HTML

By mimicking highly parallel biological systems, neuromorphic hardware provides the capability of information processing within a compact and energy-efficient platform. However, the traditional von Neumann architecture and the limited signal connections have severely constrained the scalability and performance of such hardware implementations. Recently, many research efforts have investigated...

• ### Multiobjective Optimization for Model Selection in Kernel Methods in Regression

Publication Year: 2014, Page(s):1879 - 1893
Cited by:  Papers (4)
PDF (2068 KB) | HTML

Regression plays a major role in many scientific and engineering problems. The goal of regression is to learn the unknown underlying function from a set of sample vectors with known outcomes. In recent years, kernel methods in regression have facilitated the estimation of nonlinear functions. However, two major (interconnected) problems remain open. The first problem is given by the bias-versus-va...

• ### Separation of Synchronous Sources Through Phase Locked Matrix Factorization

Publication Year: 2014, Page(s):1894 - 1908
PDF (2624 KB) | HTML

In this paper, we study the separation of synchronous sources (SSS) problem, which deals with the separation of sources whose phases are synchronous. This problem cannot be addressed through independent component analysis methods because synchronous sources are statistically dependent. We present a two-step algorithm, called phase locked matrix factorization (PLMF), to perform SSS. We also show th...

• ### Clipping in Neurocontrol by Adaptive Dynamic Programming

Publication Year: 2014, Page(s):1909 - 1920
Cited by:  Papers (4)
PDF (1256 KB) | HTML

In adaptive dynamic programming, neurocontrol, and reinforcement learning, the objective is for an agent to learn to choose actions so as to minimize a total cost function. In this paper, we show that when discretized time is used to model the motion of the agent, it can be very important to do clipping on the motion of the agent in the final time step of the trajectory. By clipping, we mean that ...
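A minimal illustration of the clipping idea, under the assumption that "clipping" means truncating the final discretized step (and the cost charged for it) at the terminal boundary rather than letting the agent overshoot; the one-dimensional dynamics and the function name here are hypothetical.

```python
def step_with_clipping(x, v, dt, boundary):
    """One Euler step; if it would cross the terminal boundary, stop exactly
    there and report the fraction of dt actually used (for the cost integral)."""
    x_next = x + v * dt
    if v > 0 and x_next >= boundary:
        frac = (boundary - x) / (v * dt)  # fraction of the step before crossing
        return boundary, frac * dt, True  # clipped: shorter final time step
    return x_next, dt, False
```

Without clipping, the last step would be charged for a full `dt` of cost even though the agent leaves the state space partway through it, which biases the value estimate at states near the terminal boundary.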

• ### Consensus Acceleration in a Class of Predictive Networks

Publication Year: 2014, Page(s):1921 - 1927
Cited by:  Papers (22)
PDF (423 KB) | HTML

The fastest consensus problem for fixed-topology networks has been formulated as an optimal linear iteration problem and efficiently solved in the literature. By considering a kind of predictive mechanism, we show that the consensus evolution can be further accelerated while the network topology is physically maintained. The underlying mechanism is that an effective prediction is able to induce a network ...

• ### $H_{\infty}$ Output Tracking Control of Discrete-Time Nonlinear Systems via Standard Neural Network Models

Publication Year: 2014, Page(s):1928 - 1935
Cited by:  Papers (6)
PDF (424 KB) | HTML

This brief proposes an output tracking control for a class of discrete-time nonlinear systems with disturbances. A standard neural network model is used to represent discrete-time nonlinear systems whose nonlinearity satisfies the sector conditions. H∞ control performance for the closed-loop system including the standard neural network model, the reference model, and state feedba...

• ### Extended Dissipative Analysis for Neural Networks With Time-Varying Delays

Publication Year: 2014, Page(s):1936 - 1941
Cited by:  Papers (60)
PDF (498 KB) | HTML

In this brief, an extended dissipativity analysis is conducted for neural networks with time-varying delays. The concept of extended dissipativity can be used to address the H∞, L2-L∞, passivity, and dissipativity performance requirements by adjusting the weighting matrices in a new performance index. In addition, the activation function dividing method is m...
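The performance index referred to above is commonly written as follows in the extended-dissipativity literature; the brief's exact notation and weighting matrices may differ.

```latex
\int_0^t \left( y^{\top}\Psi_1 y + 2\,y^{\top}\Psi_2\,\omega + \omega^{\top}\Psi_3\,\omega \right) \mathrm{d}s
\;\ge\; \sup_{0 \le s \le t} y(s)^{\top}\Psi_4\, y(s),
\qquad \Psi_1 \le 0,\; \Psi_3 > 0,\; \Psi_4 \ge 0,
```

where $y$ is the output and $\omega$ the disturbance. Choosing $(\Psi_1,\Psi_2,\Psi_3,\Psi_4) = (-I, 0, \gamma^2 I, 0)$ recovers $H_\infty$ performance, $(0, 0, \gamma^2 I, I)$ recovers $L_2$-$L_\infty$ performance, and $(0, I, \gamma I, 0)$ recovers passivity.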

• ### Multilinear Sparse Principal Component Analysis

Publication Year: 2014, Page(s):1942 - 1950
Cited by:  Papers (59)
PDF (1281 KB) | HTML

In this brief, multilinear sparse principal component analysis (MSPCA) is proposed for feature extraction from tensor data. MSPCA can be viewed as a further extension of the classical principal component analysis (PCA), sparse PCA (SPCA), and the recently proposed multilinear PCA (MPCA). The key operation of MSPCA is to rewrite MPCA into multilinear regression forms and relax it for sparse ...

• ### IJCNN2015 Killarney, Ireland

Publication Year: 2014, Page(s): 1951
PDF (1426 KB)

Publication Year: 2014, Page(s): 1952
PDF (1638 KB)
• ### IEEE Computational Intelligence Society Information

Publication Year: 2014, Page(s): C3
PDF (125 KB)
• ### IEEE Transactions on Neural Networks information for authors

Publication Year: 2014, Page(s): C4
PDF (128 KB)

## Aims & Scope

IEEE Transactions on Neural Networks and Learning Systems publishes technical articles that deal with the theory, design, and applications of neural networks and related learning systems.


## Meet Our Editors

Editor-in-Chief
Haibo He
Dept. of Electrical, Computer, and Biomedical Engineering
University of Rhode Island
Kingston, RI 02881, USA
ieeetnnls@gmail.com