# IEEE Transactions on Neural Networks and Learning Systems

## Volume 29 Issue 3 • March 2018



Displaying Results 1 - 25 of 29

Publication Year: 2018, Page(s):C1 - 509
• ### IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS publication information

Publication Year: 2018, Page(s): C2
• ### Robust C-Loss Kernel Classifiers

Publication Year: 2018, Page(s):510 - 522
Cited by:  Papers (1)

The correntropy-induced loss (C-loss) function has the nice property of being robust to outliers. In this paper, we study the C-loss kernel classifier with the Tikhonov regularization term, which is used to avoid overfitting. After using the half-quadratic optimization algorithm, which converges much faster than the gradient optimization algorithm, we find out that the resulting C-loss kernel clas...
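For readers unfamiliar with the criterion, a minimal sketch of the bounded C-loss on a classification margin m = y·f(x) follows. The kernel width `sigma` and the normalization constant `beta` are illustrative assumptions; this is the generic correntropy-induced loss from the information-theoretic-learning literature, not this paper's regularized kernel classifier.

```python
import numpy as np

def c_loss(margin, sigma=1.0):
    """Correntropy-induced loss on the classification margin m = y * f(x).

    The loss is bounded above by beta, which is what makes it robust to
    outliers: a grossly misclassified point (large negative margin)
    contributes at most beta, unlike the unbounded hinge or squared
    losses. beta normalizes the loss so that c_loss(0.0) == 1.
    """
    beta = 1.0 / (1.0 - np.exp(-1.0 / (2.0 * sigma ** 2)))
    return beta * (1.0 - np.exp(-(1.0 - margin) ** 2 / (2.0 * sigma ** 2)))
```

A correctly classified point with margin 1 incurs zero loss, and the loss saturates as the margin grows more negative, in contrast to the hinge loss, which grows without bound.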

• ### Synchronization of General Chaotic Neural Networks With Nonuniform Sampling and Packet Missing: A Switched System Approach

Publication Year: 2018, Page(s):523 - 533
Cited by:  Papers (6)

This paper is concerned with the exponential synchronization issue of general chaotic neural networks subject to nonuniform sampling and control packet missing in the framework of the zero-input strategy. Based on this strategy, we make use of a switched system model to describe the synchronization error system. First, when control packet missing does not occur, an exponential stability crite...

• ### A One-Layer Recurrent Neural Network for Constrained Complex-Variable Convex Optimization

Publication Year: 2018, Page(s):534 - 544
Cited by:  Papers (3)

In this paper, based on CR calculus and the penalty method, a one-layer recurrent neural network is proposed for solving constrained complex-variable convex optimization. It is proved that for any initial point in a given domain, the state of the proposed neural network reaches the feasible region in finite time and converges to an optimal solution of the constrained complex-variable convex optimiza...

• ### A Deep Convolutional Coupling Network for Change Detection Based on Heterogeneous Optical and Radar Images

Publication Year: 2018, Page(s):545 - 559
Cited by:  Papers (12)

We propose an unsupervised deep convolutional coupling network for change detection based on two heterogeneous images acquired by optical sensors and radars on different dates. Most existing change detection methods are based on homogeneous images. Due to the complementary properties of optical and radar sensors, there is an increasing interest in change detection based on heterogeneous images. Th...

• ### Nonlinear Process Fault Diagnosis Based on Serial Principal Component Analysis

Publication Year: 2018, Page(s):560 - 572
Cited by:  Papers (4)

Many industrial processes contain both linear and nonlinear parts, and kernel principal component analysis (KPCA), widely used in nonlinear process monitoring, may not offer the most effective means for dealing with these nonlinear processes. This paper proposes a new hybrid linear-nonlinear statistical modeling approach for nonlinear process monitoring by closely integrating linear principal comp...

• ### Stabilization of Neural-Network-Based Control Systems via Event-Triggered Control With Nonperiodic Sampled Data

Publication Year: 2018, Page(s):573 - 585
Cited by:  Papers (5)

This paper focuses on a problem of event-triggered stabilization for a class of nonuniformly sampled neural-network-based control systems (NNBCSs). First, a new event-triggered data transmission mechanism is designed based on the nonperiodic sampled data. Different from the previous works, the proposed triggering scheme enables the NNBCSs design to enjoy the advantages of both nonuniform and event...

• ### Optimal Switching of DC–DC Power Converters Using Approximate Dynamic Programming

Publication Year: 2018, Page(s):586 - 596
Cited by:  Papers (1)

Optimal switching between different topologies in step-down dc-dc voltage converters, with nonideal inductors and capacitors, is investigated in this paper. Challenges including a constraint on the inductor current and voltage leakage across the capacitor (due to switching) are incorporated. The objective is to generate the desired voltage with low ripple and high robustness toward line and load di...

• ### Global Asymptotic Stability and Stabilization of Neural Networks With General Noise

Publication Year: 2018, Page(s):597 - 607

Neural networks (NNs) in stochastic environments have been widely modeled as stochastic differential equations driven by white noise, such as Brownian motion (the Wiener process), in the existing literature. However, these are not necessarily the best models for describing the dynamic characteristics of NNs disturbed by nonwhite noise in some specific situations. In this paper, a general noise disturbance, which may be ...

• ### Supervised Discrete Hashing With Relaxation

Publication Year: 2018, Page(s):608 - 617
Cited by:  Papers (6)

Data-dependent hashing has recently attracted attention due to being able to support efficient retrieval and storage of high-dimensional data, such as documents, images, and videos. In this paper, we propose a novel learning-based hashing method called “supervised discrete hashing with relaxation” (SDHR) based on “supervised discrete hashing” (SDH). SDH uses ordinary least squares regression and t...

• ### Dissipativity Analysis for Stochastic Memristive Neural Networks With Time-Varying Delays: A Discrete-Time Case

Publication Year: 2018, Page(s):618 - 630
Cited by:  Papers (4)

In this paper, the dissipativity problem of discrete-time memristive neural networks (DMNNs) with time-varying delays and stochastic perturbation is investigated. A class of logical switched functions is put forward to reflect the memristor-based switched property of connection weights, and the DMNNs are then recast into a tractable model. Based on the tractable model, the robust analysis method a...

• ### Adaptive Reliable $H_\infty$ Static Output Feedback Control Against Markovian Jumping Sensor Failures

Publication Year: 2018, Page(s):631 - 644
Cited by:  Papers (3)

This paper investigates the adaptive H∞ static output feedback (SOF) control problem for continuous-time linear systems with stochastic sensor failures. A multi-Markovian variable is introduced to denote the failure scaling factors for each sensor. Different from the existing results, the failure parameters are stochastically jumping and their bounds are unknown. An adaptive reliable ...

• ### Self-Taught Low-Rank Coding for Visual Learning

Publication Year: 2018, Page(s):645 - 656
Cited by:  Papers (5)

The lack of labeled data presents a common challenge in many computer vision and machine learning tasks. Semisupervised learning and transfer learning methods have been developed to tackle this challenge by utilizing auxiliary samples from the same domain or from a different domain, respectively. Self-taught learning, which is a special type of transfer learning, has fewer restrictions on the choi...

• ### Multiview Boosting With Information Propagation for Classification

Publication Year: 2018, Page(s):657 - 669

Multiview learning has shown promising potential in many applications. However, most techniques focus on either view consistency or view diversity. In this paper, we introduce a novel multiview boosting algorithm, called Boost.SH, that computes weak classifiers independently for each view but uses a shared weight distribution to propagate information among the multiple views to ensure consis...

• ### Probabilistic Low-Rank Multitask Learning

Publication Year: 2018, Page(s):670 - 680
Cited by:  Papers (2)

In this paper, we consider the problem of learning multiple related tasks simultaneously with the goal of improving the generalization performance of individual tasks. The key challenge is to effectively exploit the shared information across multiple tasks as well as preserve the discriminative information for each individual task. To address this, we propose a novel probabilistic model for multit...

• ### Experienced Gray Wolf Optimization Through Reinforcement Learning and Neural Networks

Publication Year: 2018, Page(s):681 - 694
Cited by:  Papers (1)

In this paper, a variant of gray wolf optimization (GWO) that uses reinforcement learning principles combined with neural networks to enhance the performance is proposed. The aim is to overcome, through reinforcement learning, the common challenge of setting the right parameters for the algorithm. In GWO, a single parameter is used to control the exploration/exploitation rate, which influences the perform...
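For context, the baseline GWO update that the paper builds on can be sketched as follows. This follows the standard formulation with a linearly decaying control parameter `a`; the population size, iteration count, and bounds are illustrative assumptions, and the paper's reinforcement-learning enhancement is not reproduced here.

```python
import numpy as np

def gwo_minimize(f, dim, n_wolves=20, iters=200, lb=-5.0, ub=5.0, seed=0):
    """Baseline gray wolf optimization: each wolf moves toward the three
    best solutions (alpha, beta, delta) found so far. The parameter `a`
    decays linearly from 2 to 0, shifting the swarm from exploration to
    exploitation; the paper's variant learns this schedule instead."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_wolves, dim))
    for t in range(iters):
        fitness = np.array([f(x) for x in X])
        order = np.argsort(fitness)
        alpha, beta, delta = X[order[0]], X[order[1]], X[order[2]]
        a = 2.0 * (1.0 - t / iters)  # exploration/exploitation knob
        new_X = np.empty_like(X)
        for i in range(n_wolves):
            candidates = []
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2.0 * a * r1 - a, 2.0 * r2
                D = np.abs(C * leader - X[i])  # distance to the leader
                candidates.append(leader - A * D)
            new_X[i] = np.clip(np.mean(candidates, axis=0), lb, ub)
        X = new_X
    fitness = np.array([f(x) for x in X])
    return X[np.argmin(fitness)], float(fitness.min())
```

On a smooth test function such as the 2-D sphere, this sketch converges close to the optimum within a few hundred iterations, but its behavior is sensitive to the decay schedule of `a`, which is precisely the tuning burden the abstract describes.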

• ### Cooperative Adaptive Output Regulation for Second-Order Nonlinear Multiagent Systems With Jointly Connected Switching Networks

Publication Year: 2018, Page(s):695 - 705
Cited by:  Papers (5)

This paper studies the cooperative global robust output regulation problem for a class of heterogeneous second-order nonlinear uncertain multiagent systems with jointly connected switching networks. The main contributions consist of the following three aspects. First, we generalize the result of the adaptive distributed observer from undirected jointly connected switching networks to directed join...

• ### Determination of the Edge of Criticality in Echo State Networks Through Fisher Information Maximization

Publication Year: 2018, Page(s):706 - 717
Cited by:  Papers (3)

It is a widely accepted fact that the computational capability of recurrent neural networks (RNNs) is maximized on the so-called “edge of criticality.” Once the network operates in this configuration, it performs efficiently on a specific application both in terms of: 1) low prediction error and 2) high short-term memory capacity. Since the behavior of recurrent networks is strongly influenced by ...

• ### Identifying Objective and Subjective Words via Topic Modeling

Publication Year: 2018, Page(s):718 - 730

It is observed that distinct words in a given document have either strong or weak ability in delivering facts (i.e., the objective sense) or expressing opinions (i.e., the subjective sense) depending on the topics they associate with. Motivated by the intuitive assumption that different words have varying degrees of discriminative power in delivering the objective sense or the subjective sense with...

• ### Insights Into the Robustness of Minimum Error Entropy Estimation

Publication Year: 2018, Page(s):731 - 737
Cited by:  Papers (2)

The minimum error entropy (MEE) is an important and highly effective optimization criterion in information theoretic learning (ITL). For regression problems, MEE aims at minimizing the entropy of the prediction error such that the estimated model preserves the information of the data generating system as much as possible. In many real world applications, the MEE estimator can outperform significan...
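As a hedged illustration of the criterion itself, the standard ITL estimator of Renyi's quadratic entropy from prediction errors is sketched below; the Gaussian kernel width `sigma` is an assumption, and this is the textbook MEE estimator, not this paper's robustness analysis.

```python
import numpy as np

def information_potential(errors, sigma=1.0):
    """Parzen-window estimate of the information potential
    V(e) = (1/N^2) * sum_{i,j} G_sigma(e_i - e_j).
    Renyi's quadratic entropy is H2(e) = -log V(e), so minimizing
    entropy (MEE) is equivalent to maximizing V, which concentrates
    the error distribution."""
    e = np.asarray(errors, dtype=float)
    diff = e[:, None] - e[None, :]  # all pairwise error differences
    G = np.exp(-diff ** 2 / (2.0 * sigma ** 2)) / (sigma * np.sqrt(2.0 * np.pi))
    return float(G.mean())

def quadratic_entropy(errors, sigma=1.0):
    """Estimated Renyi quadratic entropy of the error sample."""
    return -np.log(information_potential(errors, sigma))
```

Note that the estimator depends only on pairwise error differences, so it is shift-invariant: MEE concentrates the error distribution but leaves its mean free, which is why a bias term is usually adjusted separately in MEE regression.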

• ### Robust DLPP With Nongreedy $\ell_1$-Norm Minimization and Maximization

Publication Year: 2018, Page(s):738 - 743
Cited by:  Papers (1)

Recently, discriminant locality preserving projection based on L1-norm (DLPP-L1) was developed for robust subspace learning and image classification. It obtains projection vectors by greedy strategy, i.e., all projection vectors are optimized individually through maximizing the objective function. Thus, the obtained solution does not necessarily best optimize the corresponding trace ratio optimiza...

• ### Stability of Rotor Hopfield Neural Networks With Synchronous Mode

Publication Year: 2018, Page(s):744 - 748
Cited by:  Papers (1)

A complex-valued Hopfield neural network (CHNN) is a model of a Hopfield neural network using multistate neurons. The stability conditions of CHNNs have been widely studied. A CHNN with a synchronous mode will converge to a fixed point or a cycle of length 2. A rotor Hopfield neural network (RHNN) is also a model of a multistate Hopfield neural network. RHNNs have much higher storage capacity and ...

• ### Terminal Sliding Mode-Based Consensus Tracking Control for Networked Uncertain Mechanical Systems on Digraphs

Publication Year: 2018, Page(s):749 - 756
Cited by:  Papers (2)

This brief investigates the finite-time consensus tracking control problem for networked uncertain mechanical systems on digraphs. A new terminal sliding-mode-based cooperative control scheme is developed to guarantee that the tracking errors converge to an arbitrarily small bound around zero in finite time. All the networked systems can have different dynamics and all the dynamics are unknown. A ...

• ### Kernel-Based Multilayer Extreme Learning Machines for Representation Learning

Publication Year: 2018, Page(s):757 - 762
Cited by:  Papers (3)

Recently, multilayer extreme learning machine (ML-ELM) was applied to stacked autoencoder (SAE) for representation learning. In contrast to traditional SAE, the training time of ML-ELM is significantly reduced from hours to seconds with high accuracy. However, ML-ELM suffers from several drawbacks: 1) manual tuning on the number of hidden nodes in every layer is an uncertain factor to training tim...

## Aims & Scope

IEEE Transactions on Neural Networks and Learning Systems publishes technical articles that deal with the theory, design, and applications of neural networks and related learning systems.


## Meet Our Editors

Editor-in-Chief
Haibo He
Dept. of Electrical, Computer, and Biomedical Engineering
University of Rhode Island
Kingston, RI 02881, USA
ieeetnnls@gmail.com