IEEE Transactions on Neural Networks and Learning Systems

Issue 5 • May 2016

  • Table of contents

    Publication Year: 2016, Page(s): C1
    PDF (117 KB)
    Freely Available from IEEE
  • IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS publication information

    Publication Year: 2016, Page(s): C2
    PDF (88 KB)
    Freely Available from IEEE
  • Storing Sequences in Binary Tournament-Based Neural Networks

    Publication Year: 2016, Page(s):913 - 925
    Cited by:  Papers (3)
    Abstract | PDF (1829 KB) | HTML

    An extension to a recently introduced architecture of clique-based neural networks is presented. This extension makes it possible to store sequences with high efficiency. To obtain this property, network connections are provided with orientation and with flexible redundancy carried by both spatial and temporal redundancies, a mechanism of anticipation being introduced in the model. In addition to ...

  • Data Generators for Learning Systems Based on RBF Networks

    Publication Year: 2016, Page(s):926 - 938
    Cited by:  Papers (1)
    Abstract | PDF (2269 KB) | HTML

    There are plenty of problems where the data available is scarce and expensive. We propose a generator of semiartificial data with similar properties to the original data, which enables the development and testing of different data mining algorithms and the optimization of their parameters. The generated data allow large-scale experimentation and simulations without danger of overfitting. The propo...

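    A minimal sketch of the general idea behind such a generator: fit a simple density model to the scarce original data and sample new labelled points from it. The sketch below uses a per-class Gaussian mixture from scikit-learn rather than the RBF-network construction proposed in the paper; the function and parameter names are illustrative only.

        # Illustrative only: per-class Gaussian mixtures standing in for the
        # paper's RBF-network-based generator of semiartificial data.
        import numpy as np
        from sklearn.datasets import load_iris
        from sklearn.mixture import GaussianMixture

        def generate_semiartificial(X, y, n_per_class=200, n_components=2, seed=0):
            """Fit one Gaussian mixture per class and sample new labelled points."""
            X_new, y_new = [], []
            for label in np.unique(y):
                gm = GaussianMixture(n_components=n_components, random_state=seed)
                gm.fit(X[y == label])
                samples, _ = gm.sample(n_per_class)
                X_new.append(samples)
                y_new.append(np.full(n_per_class, label))
            return np.vstack(X_new), np.concatenate(y_new)

        X, y = load_iris(return_X_y=True)
        X_gen, y_gen = generate_semiartificial(X, y)
        print(X_gen.shape, y_gen.shape)   # (600, 4) (600,)
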
  • A Novel Framework for Learning Geometry-Aware Kernels

    Publication Year: 2016, Page(s):939 - 951
    Cited by:  Papers (4)
    Abstract | PDF (2666 KB) | HTML

    Real-world data usually have a nonlinear geometric structure and are often assumed to lie on or close to a low-dimensional manifold in a high-dimensional space. How to detect this nonlinear geometric structure of the data is important for learning algorithms. Recently, there has been a surge of interest in utilizing kernels to exploit the manifold structure of the data. S...

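    For context, one classical way to build a geometry-aware kernel is to apply a heat (diffusion) kernel to a k-nearest-neighbour graph of the data, so that similarity follows the manifold rather than straight-line distance. The sketch below shows only this standard construction, not the framework proposed in the paper.

        # Generic manifold-aware kernel: heat kernel exp(-t L) on a kNN graph.
        import numpy as np
        from scipy.linalg import expm
        from sklearn.neighbors import kneighbors_graph

        def knn_heat_kernel(X, k=10, t=1.0):
            """Return K = exp(-t * L) for the symmetrized kNN graph Laplacian L."""
            W = kneighbors_graph(X, n_neighbors=k, mode='connectivity').toarray()
            W = np.maximum(W, W.T)              # symmetrize the adjacency matrix
            L = np.diag(W.sum(axis=1)) - W      # combinatorial graph Laplacian
            return expm(-t * L)                 # positive semidefinite kernel matrix

        X = np.random.RandomState(0).randn(100, 5)
        K = knn_heat_kernel(X)
        print(K.shape, np.allclose(K, K.T))     # (100, 100) True
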
  • Hybrid Sampling-Based Clustering Ensemble With Global and Local Constitutions

    Publication Year: 2016, Page(s):952 - 965
    Cited by:  Papers (9)
    Abstract | PDF (4879 KB) | HTML

    Among a number of ensemble learning techniques, boosting and bagging are the most popular sampling-based ensemble approaches for classification problems. Boosting is considered stronger than bagging on noise-free data sets with complex class structures, whereas bagging is more robust than boosting when noisy data are present. In this paper, we extend both ensemble approaches to clustering...

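    As a rough illustration of a sampling-based clustering ensemble (not the hybrid global/local method of the paper), the sketch below runs k-means on bootstrap samples, accumulates a co-association matrix, and cuts it with average-linkage clustering; the function names and parameter choices are illustrative only.

        # Generic evidence-accumulation clustering ensemble with bootstrap sampling.
        import numpy as np
        from scipy.cluster.hierarchy import fcluster, linkage
        from scipy.spatial.distance import squareform
        from sklearn.cluster import KMeans
        from sklearn.datasets import make_blobs

        def bagged_clustering_ensemble(X, n_clusters=3, n_runs=30, seed=0):
            rng = np.random.RandomState(seed)
            n = X.shape[0]
            co_assoc = np.zeros((n, n))
            for r in range(n_runs):
                idx = rng.choice(n, size=n, replace=True)         # bootstrap sample
                km = KMeans(n_clusters=n_clusters, n_init=5, random_state=r).fit(X[idx])
                labels = km.predict(X)                            # label every point
                co_assoc += labels[:, None] == labels[None, :]    # co-membership votes
            dist = 1.0 - co_assoc / n_runs                        # ensemble distance
            np.fill_diagonal(dist, 0.0)
            Z = linkage(squareform(dist, checks=False), method='average')
            return fcluster(Z, t=n_clusters, criterion='maxclust')

        X, _ = make_blobs(n_samples=200, centers=3, random_state=0)
        print(np.bincount(bagged_clustering_ensemble(X)))
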
  • The Proximal Trajectory Algorithm in SVM Cross Validation

    Publication Year: 2016, Page(s):966 - 977
    Cited by:  Papers (2)
    Abstract | PDF (1908 KB) | HTML

    We propose a bilevel cross-validation scheme for support vector machine (SVM) model selection based on the construction of the entire regularization path. Since such a path is a particular case of the more general proximal trajectory concept from nonsmooth optimization, we propose for its construction an algorithm based on solving a finite number of structured linear programs. Our methodology, diffe...

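    For reference, the conventional alternative against which path-following schemes are usually compared is a discretized grid search over the regularization parameter C under k-fold cross-validation, e.g. with scikit-learn as below; this is only the baseline, not the paper's proximal trajectory algorithm.

        # Baseline SVM model selection: k-fold grid search over C.
        import numpy as np
        from sklearn.datasets import load_breast_cancer
        from sklearn.model_selection import GridSearchCV
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        X, y = load_breast_cancer(return_X_y=True)
        pipe = make_pipeline(StandardScaler(), SVC(kernel='linear'))
        grid = GridSearchCV(pipe, {'svc__C': np.logspace(-3, 3, 13)}, cv=5)
        grid.fit(X, y)
        print(grid.best_params_, round(grid.best_score_, 3))
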
  • MLPNN Training via a Multiobjective Optimization of Training Error and Stochastic Sensitivity

    Publication Year: 2016, Page(s):978 - 992
    Cited by:  Papers (3)
    Abstract | PDF (2880 KB) | HTML

    The training of a multilayer perceptron neural network (MLPNN) concerns the selection of its architecture and the connection weights via the minimization of both the training error and a penalty term. Different penalty terms have been proposed to control the smoothness of the MLPNN for better generalization capability. However, controlling its smoothness using, for instance, the norm of weights or...

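    The second objective can be illustrated, loosely, as the sensitivity of a trained network's outputs to small random input perturbations, estimated by Monte Carlo. The sketch below shows such a generic estimator for any fitted scikit-learn classifier; it is not the paper's exact sensitivity definition, nor its multiobjective training algorithm.

        # Generic Monte Carlo estimate of output sensitivity to input perturbations.
        import numpy as np
        from sklearn.datasets import load_digits
        from sklearn.neural_network import MLPClassifier

        def stochastic_sensitivity(model, X, radius=0.1, n_samples=20, seed=0):
            """Mean squared output change over random inputs within `radius`."""
            rng = np.random.RandomState(seed)
            base = model.predict_proba(X)
            diffs = []
            for _ in range(n_samples):
                noise = rng.uniform(-radius, radius, size=X.shape)
                diffs.append(((model.predict_proba(X + noise) - base) ** 2).sum(axis=1))
            return float(np.mean(diffs))

        X, y = load_digits(return_X_y=True)
        X = X / 16.0                                   # scale pixel values to [0, 1]
        mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0).fit(X, y)
        print(stochastic_sensitivity(mlp, X))
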
  • Generalization Performance of Regularized Ranking With Multiscale Kernels

    Publication Year: 2016, Page(s):993 - 1002
    Abstract | PDF (991 KB) | HTML

    The regularized kernel method for the ranking problem has attracted increasing attention in machine learning. Previous regularized ranking algorithms are usually based on reproducing kernel Hilbert spaces with a single kernel. In this paper, we go beyond this framework by investigating the generalization performance of regularized ranking with multiscale kernels. A novel ranking algorithm...

  • A Maximum Margin Approach for Semisupervised Ordinal Regression Clustering

    Publication Year: 2016, Page(s):1003 - 1019
    Cited by:  Papers (1)
    Abstract | PDF (2215 KB) | HTML

    Ordinal regression (OR) is generally defined as the task where the input samples are ranked on an ordinal scale. OR has found a wide variety of applications, and a great deal of work has been done on it. However, most of the existing work focuses on supervised/semisupervised OR classification, and the semisupervised OR clustering problems have not been explicitly addressed. In real-world OR applic...

  • Learning Discriminative Stein Kernel for SPD Matrices and Its Applications

    Publication Year: 2016, Page(s):1020 - 1033
    Cited by:  Papers (1)
    Abstract | PDF (2496 KB) | HTML | Multimedia

    Stein kernel (SK) has recently shown promising performance on classifying images represented by symmetric positive definite (SPD) matrices. It evaluates the similarity between two SPD matrices through their eigenvalues. In this paper, we argue that directly using the original eigenvalues may be problematic because: 1) eigenvalue estimation becomes biased when the number of samples is inadequate, w...

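    For reference, the original Stein kernel that the paper builds on is derived from the symmetric Stein (Jensen-Bregman LogDet) divergence between SPD matrices; the discriminative eigenvalue adjustment the paper proposes is not shown in the sketch below.

        # Original Stein divergence and Stein kernel between SPD matrices.
        import numpy as np

        def stein_divergence(X, Y):
            """S(X, Y) = log det((X + Y) / 2) - 0.5 * log det(X Y) for SPD X, Y."""
            _, ld_mid = np.linalg.slogdet((X + Y) / 2.0)
            _, ld_x = np.linalg.slogdet(X)
            _, ld_y = np.linalg.slogdet(Y)
            return ld_mid - 0.5 * (ld_x + ld_y)

        def stein_kernel(X, Y, theta=1.0):
            """k(X, Y) = exp(-theta * S(X, Y)); positive definite for suitable theta."""
            return np.exp(-theta * stein_divergence(X, Y))

        rng = np.random.RandomState(0)
        A = rng.randn(5, 5); X = A @ A.T + 5 * np.eye(5)   # random SPD matrices
        B = rng.randn(5, 5); Y = B @ B.T + 5 * np.eye(5)
        print(stein_divergence(X, Y), stein_kernel(X, Y))
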
  • Probabilistic Slow Features for Behavior Analysis

    Publication Year: 2016, Page(s):1034 - 1048
    Cited by:  Papers (1)
    Abstract | PDF (3940 KB) | HTML

    A recently introduced latent feature learning technique for time-varying dynamic phenomena analysis is the so-called slow feature analysis (SFA). SFA is a deterministic component analysis technique for multidimensional sequences that, by minimizing the variance of the first-order time derivative approximation of the latent variables, finds uncorrelated projections that extract slowly varying featu...

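    The deterministic SFA that the paper starts from can be written, in its linear form, as a generalized eigenvalue problem: minimize the variance of the temporal difference of the projected signal subject to unit variance and decorrelation. A minimal sketch of that classical formulation (not the probabilistic model proposed in the paper) follows.

        # Classical linear slow feature analysis via a generalized eigenproblem.
        import numpy as np
        from scipy.linalg import eigh

        def linear_sfa(X, n_features=2):
            """X has shape (time, dims); return the n slowest projections (dims, n)."""
            Xc = X - X.mean(axis=0)                 # center the signal
            dX = np.diff(Xc, axis=0)                # first-order temporal difference
            A = dX.T @ dX / (len(dX) - 1)           # covariance of the difference
            B = Xc.T @ Xc / (len(Xc) - 1)           # covariance of the signal
            eigvals, eigvecs = eigh(A, B)           # ascending generalized eigenvalues
            return eigvecs[:, :n_features]          # smallest eigenvalues = slowest

        t = np.linspace(0, 4 * np.pi, 1000)
        slow, fast = np.sin(t), np.sin(20 * t)
        X = np.column_stack([slow + 0.5 * fast, fast - 0.3 * slow])   # mixed signals
        W = linear_sfa(X, n_features=1)
        print(W.ravel())   # projection that best recovers the slow component
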
  • Improper Complex-Valued Bhattacharyya Distance

    Publication Year: 2016, Page(s):1049 - 1064
    Abstract | PDF (1341 KB) | HTML

    Motivated by the application of complex-valued signal processing techniques in statistical pattern recognition, classification, and Gaussian mixture (GM) modeling, this paper derives analytical expressions for computing the Bhattacharyya coefficient/distance (BC/BD) between two improper complex-valued Gaussian distributions. The BC/BD is one of the most widely used statistical measures for evaluating ...

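    For orientation, the familiar real-valued form of the Bhattacharyya distance between two Gaussians is shown below; the paper's analytical expressions for improper complex-valued Gaussians generalize this case and are not reproduced here.

        # Standard Bhattacharyya distance between two real Gaussian densities.
        import numpy as np

        def bhattacharyya_distance(mu1, S1, mu2, S2):
            """DB = 1/8 (mu1-mu2)^T S^-1 (mu1-mu2) + 1/2 ln(det S / sqrt(det S1 det S2)),
            with S = (S1 + S2) / 2. The Bhattacharyya coefficient is exp(-DB)."""
            S = (S1 + S2) / 2.0
            diff = mu1 - mu2
            term1 = diff @ np.linalg.solve(S, diff) / 8.0
            _, ld_S = np.linalg.slogdet(S)
            _, ld_1 = np.linalg.slogdet(S1)
            _, ld_2 = np.linalg.slogdet(S2)
            term2 = 0.5 * (ld_S - 0.5 * (ld_1 + ld_2))
            return term1 + term2

        mu1, S1 = np.zeros(2), np.eye(2)
        mu2, S2 = np.array([1.0, 0.0]), 2.0 * np.eye(2)
        db = bhattacharyya_distance(mu1, S1, mu2, S2)
        print(db, np.exp(-db))   # distance and the corresponding coefficient
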
  • A New Distance Metric for Unsupervised Learning of Categorical Data

    Publication Year: 2016, Page(s):1065 - 1079
    Cited by:  Papers (1)
    Abstract | PDF (2421 KB) | HTML

    A distance metric is the basis of many learning algorithms, and its effectiveness usually has a significant influence on the learning results. In general, measuring distance for numerical data is a tractable task, but it can be a nontrivial problem for categorical data sets. This paper, therefore, presents a new distance metric for categorical data based on the characteristics of categorical value...

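    The common baseline that such work typically improves on is the simple matching (Hamming) distance, which treats every attribute mismatch identically; a minimal version is sketched below, and it is not the metric proposed in the paper.

        # Baseline categorical distance: fraction of mismatching attributes.
        import numpy as np

        def simple_matching_distance(a, b):
            """Fraction of attributes on which two categorical records disagree."""
            a, b = np.asarray(a, dtype=object), np.asarray(b, dtype=object)
            return float(np.mean(a != b))

        x = ["red", "small", "round"]
        y = ["red", "large", "square"]
        print(simple_matching_distance(x, y))   # 2 of 3 attributes differ -> 0.666...
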
  • Integrated Low-Rank-Based Discriminative Feature Learning for Recognition

    Publication Year: 2016, Page(s):1080 - 1093
    Cited by:  Papers (3)
    Abstract | PDF (3435 KB) | HTML | Multimedia

    Feature learning plays a central role in pattern recognition. In recent years, many representation-based feature learning methods have been proposed and have achieved great success in many applications. However, these methods perform feature learning and subsequent classification in two separate steps, which may not be optimal for recognition tasks. In this paper, we present a supervised low-rank-...

  • A Nearest Neighbor Classifier Employing Critical Boundary Vectors for Efficient On-Chip Template Reduction

    Publication Year: 2016, Page(s):1094 - 1107
    Cited by:  Papers (2)
    Abstract | PDF (6505 KB) | HTML

    Aiming at efficient data condensation and improved accuracy, this paper presents a hardware-friendly template reduction (TR) method for nearest neighbor (NN) classifiers by introducing the concept of critical boundary vectors. A hardware system is also implemented to demonstrate the feasibility of using a field-programmable gate array (FPGA) to accelerate the proposed method. Initially, k-me...

    Open Access
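    In the same spirit, a generic (and much cruder) boundary-focused template reduction can be sketched as: keep only training samples whose k-nearest-neighbour set contains a point of another class. The code below shows this heuristic with scikit-learn; it is not the paper's critical-boundary-vector algorithm or its FPGA implementation.

        # Generic boundary-focused template reduction for a 1-NN classifier.
        import numpy as np
        from sklearn.datasets import load_digits
        from sklearn.model_selection import train_test_split
        from sklearn.neighbors import KNeighborsClassifier, NearestNeighbors

        def boundary_templates(X, y, k=5):
            """Return indices of samples whose k-neighbourhood is not class-pure."""
            nn = NearestNeighbors(n_neighbors=k + 1).fit(X)        # +1 to skip self
            _, idx = nn.kneighbors(X)
            keep = [i for i in range(len(X)) if np.any(y[idx[i, 1:]] != y[i])]
            return np.array(keep)

        X, y = load_digits(return_X_y=True)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        keep = boundary_templates(X_tr, y_tr)
        full = KNeighborsClassifier(n_neighbors=1).fit(X_tr, y_tr).score(X_te, y_te)
        red = KNeighborsClassifier(n_neighbors=1).fit(X_tr[keep], y_tr[keep]).score(X_te, y_te)
        print(len(keep), "of", len(X_tr), "templates kept;", full, "vs", red)
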
  • Tree Ensembles on the Induced Discrete Space

    Publication Year: 2016, Page(s):1108 - 1113
    Abstract | PDF (1916 KB) | HTML

    Decision trees are widely used predictive models in machine learning. Recently, the K-tree was proposed, in which the original discrete feature space is expanded by generating all orderings of values of k discrete attributes and these orderings are used as the new attributes in decision tree induction. Although the K-tree performs significantly better than the proper one, its exponential time complexity can...

  • IEEE World Congress on Computational Intelligence (WCCI)

    Publication Year: 2016, Page(s): 1114
    PDF (810 KB)
    Freely Available from IEEE
  • 2016 IEEE Symposium Series on Computational Intelligence

    Publication Year: 2016, Page(s): 1115
    PDF (992 KB)
    Freely Available from IEEE
  • Call for nominations for the next Editor-in-Chief of IEEE Transactions on Fuzzy Systems

    Publication Year: 2016, Page(s): 1116
    PDF (522 KB)
    Freely Available from IEEE
  • IEEE Computational Intelligence Society Information

    Publication Year: 2016, Page(s): C3
    PDF (159 KB)
    Freely Available from IEEE
  • IEEE Transactions on Neural Networks information for authors

    Publication Year: 2016, Page(s): C4
    PDF (109 KB)
    Freely Available from IEEE

Aims & Scope

IEEE Transactions on Neural Networks and Learning Systems publishes technical articles that deal with the theory, design, and applications of neural networks and related learning systems.


Meet Our Editors

Editor-in-Chief
Haibo He
Dept. of Electrical, Computer, and Biomedical Engineering
University of Rhode Island
Kingston, RI 02881, USA
ieeetnnls@gmail.com