# IEEE Transactions on Neural Networks and Learning Systems


Publication Year: 2012, Page(s): C1
| PDF (118 KB)
• ### IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS publication information

Publication Year: 2012, Page(s): C2
| PDF (138 KB)
• ### Unsupervised Learning by Minimal Entropy Encoding

Publication Year: 2012, Page(s):1849 - 1861
Cited by: Papers (4)
| PDF (638 KB) | HTML

Following basic principles of information-theoretic learning, in this paper, we propose a novel approach to data clustering, referred to as minimal entropy encoding (MEE), which is based on a set of functions (features) projecting each input onto a minimum entropy configuration (code). Inspired by traditional parsimony principles, we seek solutions in reproducing kernel Hilbert spaces and then we ...
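The quantity at the heart of this abstract, the entropy of a soft assignment code, can be illustrated with a small sketch. This is not the paper's RKHS machinery; `code_entropy` and the example vectors are invented here for illustration:

```python
import numpy as np

def code_entropy(p):
    """Shannon entropy (in nats) of a soft assignment vector p summing to 1."""
    p = np.asarray(p, dtype=float)
    nz = p[p > 0]                      # convention: 0 * log(0) = 0
    return float(-np.sum(nz * np.log(nz)))

# A near-one-hot code has low entropy; a uniform code attains the maximum,
# log(k) for k components. Minimal-entropy encoding drives codes toward
# the crisp end of this spectrum.
crisp = [0.97, 0.01, 0.01, 0.01]
diffuse = [0.25, 0.25, 0.25, 0.25]
```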

• ### Nonparametric Mixtures of Gaussian Processes With Power-Law Behavior

Publication Year: 2012, Page(s):1862 - 1871
Cited by: Papers (9)
| PDF (772 KB) | HTML

Gaussian processes (GPs) constitute one of the most important Bayesian machine learning approaches, based on a particularly effective method for placing a prior distribution over the space of regression functions. Several researchers have considered postulating mixtures of GPs as a means of dealing with nonstationary covariance functions, discontinuities, multimodality, and overlapping output sign...

• ### Generalization Bounds of ERM-Based Learning Processes for Continuous-Time Markov Chains

Publication Year: 2012, Page(s):1872 - 1883
Cited by: Papers (11)
| PDF (407 KB) | HTML

Many existing results on statistical learning theory are based on the assumption that samples are independently and identically distributed (i.i.d.). However, the assumption of i.i.d. samples is not suitable for practical application to problems in which samples are time dependent. In this paper, we are mainly concerned with the empirical risk minimization (ERM) based learning process for time-dep...

• ### Neural Network Based Online Simultaneous Policy Update Algorithm for Solving the HJI Equation in Nonlinear $H_{\infty}$ Control

Publication Year: 2012, Page(s):1884 - 1895
Cited by: Papers (56)
| PDF (447 KB) | HTML

It is well known that the nonlinear H∞ state feedback control problem relies on the solution of the Hamilton-Jacobi-Isaacs (HJI) equation, which is a nonlinear partial differential equation that has proven to be impossible to solve analytically. In this paper, a neural network (NN)-based online simultaneous policy update algorithm (SPUA) is developed to solve the HJI equat...

• ### Cost-Sensitive Sequences of Bregman Divergences

Publication Year: 2012, Page(s):1896 - 1904
| PDF (316 KB) | HTML

The minimization of the empirical risk based on an arbitrary Bregman divergence is known to provide posterior class probability estimates in classification problems, but the accuracy of the estimate for a given value of the true posterior depends on the specific choice of the divergence. Ad hoc Bregman divergences can be designed to get a higher estimation accuracy for the posterior probability va...
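For readers unfamiliar with the term, a Bregman divergence is determined by a convex generator F via D_F(x, y) = F(x) − F(y) − ⟨∇F(y), x − y⟩. A minimal sketch, with generators and points chosen here purely for illustration, shows how squared Euclidean distance and KL divergence arise as instances:

```python
import numpy as np

def bregman(F, grad_F, x, y):
    """Bregman divergence D_F(x, y) = F(x) - F(y) - <grad F(y), x - y>."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return float(F(x) - F(y) - grad_F(y) @ (x - y))

# Generator F(x) = ||x||^2 recovers the squared Euclidean distance.
sq = lambda v: float(v @ v)
sq_grad = lambda v: 2.0 * v

# Generator F(p) = sum_i p_i log p_i (negative entropy) recovers the
# KL divergence on probability vectors.
negent = lambda p: float(np.sum(p * np.log(p)))
negent_grad = lambda p: np.log(p) + 1.0
```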

• ### Noise-Tuning-Based Hysteretic Noisy Chaotic Neural Network for Broadcast Scheduling Problem in Wireless Multihop Networks

Publication Year: 2012, Page(s):1905 - 1918
Cited by: Papers (8)
| PDF (2628 KB) | HTML

Compared with noisy chaotic neural networks (NCNNs), hysteretic noisy chaotic neural networks (HNCNNs) are more likely to exhibit better optimization performance at higher noise levels, but behave worse at lower noise levels. In order to improve the optimization performance of HNCNNs, this paper presents a novel noise-tuning-based hysteretic noisy chaotic neural network (NHNCNN). Using a noise tun...

• ### Exponential Stabilization of Memristive Neural Networks With Time Delays

Publication Year: 2012, Page(s):1919 - 1929
Cited by: Papers (106)
| PDF (608 KB) | HTML

In this paper, a general class of memristive neural networks with time delays is formulated and studied. Some sufficient conditions in terms of linear matrix inequalities are obtained, in order to achieve exponential stabilization. The result can be applied to the closed-loop control of memristive systems. In particular, several succinct criteria are given to ascertain the exponential stabilizatio...
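As a far simpler illustration of the LMI flavor of such conditions (not the delayed memristive setting of the paper), the classical Lyapunov inequality AᵀP + PA ≺ 0 with P ≻ 0 certifies exponential stability of ẋ = Ax and can be checked numerically; the test matrices below are made up:

```python
import numpy as np

def lyapunov_lmi_holds(A, P):
    """True if P is positive definite and A^T P + P A is negative definite,
    certifying exponential stability of dx/dt = A x."""
    A, P = np.asarray(A, dtype=float), np.asarray(P, dtype=float)
    p_pos = np.all(np.linalg.eigvalsh((P + P.T) / 2.0) > 0)
    q_neg = np.all(np.linalg.eigvalsh(A.T @ P + P @ A) < 0)
    return bool(p_pos and q_neg)

A_stable = np.array([[-1.0, 0.5], [0.0, -2.0]])    # Hurwitz example
A_unstable = np.array([[1.0, 0.0], [0.0, -1.0]])   # one unstable mode
```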

• ### Descent Algorithms on Oblique Manifold for Source-Adaptive ICA Contrast

Publication Year: 2012, Page(s):1930 - 1947
Cited by: Papers (9)
| PDF (1571 KB) | HTML

A Riemannian manifold optimization strategy is proposed to facilitate the relaxation of the orthonormality constraint in a more natural way in the course of performing independent component analysis (ICA) that employs a mutual information-based source-adaptive contrast function. Despite the extensive development of manifold techniques catering to the orthonormality constraint, only a limited numbe...

• ### Sparse Approximation to the Eigensubspace for Discrimination

Publication Year: 2012, Page(s):1948 - 1960
Cited by: Papers (41)
| PDF (748 KB) | HTML

Two-dimensional (2-D) image-matrix-based projection methods for feature extraction are widely used in many fields of computer vision and pattern recognition. In this paper, we propose a novel framework called sparse 2-D projections (S2DP) for image feature extraction. Different from the existing 2-D feature extraction methods, S2DP iteratively learns the sparse projection matrix by using elastic n...

• ### Adaptive Subset Kernel Principal Component Analysis for Time-Varying Patterns

Publication Year: 2012, Page(s):1961 - 1973
Cited by: Papers (3)
| PDF (2471 KB) | HTML

Kernel principal component analysis (KPCA) and its online learning algorithms have been proposed and widely used. Since KPCA uses training samples for bases of the operator, its online learning algorithms require the preparation of all training samples beforehand. Subset KPCA (SubKPCA), which uses a subset of samples for the basis set, has been proposed and has demonstrated better performance with...

• ### Robust Single-Hidden Layer Feedforward Network-Based Pattern Classifier

Publication Year: 2012, Page(s):1974 - 1986
Cited by: Papers (15)
| PDF (1290 KB) | HTML

In this paper, a new robust single-hidden layer feedforward network (SLFN)-based pattern classifier is developed. It is shown that the frequency spectrums of the desired feature vectors can be specified in terms of the discrete Fourier transform (DFT) technique. The input weights of the SLFN are then optimized with the regularization theory such that the error between the frequency components of t...
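The frequency-spectrum specification mentioned here rests on the standard DFT. A minimal sketch of computing such a spectrum (the feature vector is invented for illustration, not taken from the paper):

```python
import numpy as np

# DFT magnitude spectrum of a made-up feature vector. One full cosine
# period over four samples concentrates all spectral energy in frequency
# bin 1 and its conjugate bin 3.
feature = np.array([1.0, 0.0, -1.0, 0.0])
magnitudes = np.abs(np.fft.fft(feature))
```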

• ### Compositional Generative Mapping for Tree-Structured Data—Part I: Bottom-Up Probabilistic Modeling of Trees

Publication Year: 2012, Page(s):1987 - 2002
Cited by: Papers (8)
| PDF (806 KB) | HTML

We introduce a novel compositional (recursive) probabilistic model for trees that defines an approximated bottom-up generative process from the leaves to the root of a tree. The proposed model defines contextual state transitions from the joint configuration of the children to the parent nodes. We argue that the bottom-up context postulates different probabilistic assumptions with respect to a top...

• ### Real AdaBoost With Gate Controlled Fusion

Publication Year: 2012, Page(s):2003 - 2009
Cited by: Papers (7)
| PDF (442 KB) | HTML

In this brief, we propose to increase the capabilities of standard real AdaBoost (RAB) architectures by replacing their linear combinations with a fusion controlled by a gate with fixed kernels. Experimental results in a series of well-known benchmark problems support the effectiveness of this approach in improving classification performance. Although the need for cross-validation processes obviou...

• ### IJCNN-Dallas, TX

Publication Year: 2012, Page(s): 2010
| PDF (2417 KB)
• ### 2014 IEEE World Congress on Computational Intelligence

Publication Year: 2012, Page(s): 2011
| PDF (3429 KB)

Publication Year: 2012, Page(s): 2012
| PDF (1156 KB)
• ### 2012 Index IEEE Transactions on Neural Networks and Learning Systems Vol. 23

Publication Year: 2012, Page(s):2013 - 2037
| PDF (520 KB)
• ### IEEE Computational Intelligence Society Information

Publication Year: 2012, Page(s): C3
| PDF (39 KB)
• ### IEEE Transactions on Neural Networks information for authors

Publication Year: 2012, Page(s): C4
| PDF (39 KB)

## Aims & Scope

IEEE Transactions on Neural Networks and Learning Systems publishes technical articles that deal with the theory, design, and applications of neural networks and related learning systems.


## Meet Our Editors

Editor-in-Chief
Haibo He
Dept. of Electrical, Computer, and Biomedical Engineering
University of Rhode Island
Kingston, RI 02881, USA
ieeetnnls@gmail.com