# IEEE Transactions on Neural Networks and Learning Systems

## Volume 28 Issue 11 • Nov. 2017



Displaying Results 1 - 25 of 37

Publication Year: 2017, Page(s):C1 - 2465
• ### IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS publication information

Publication Year: 2017, Page(s): C2
• ### Randomized Prediction Games for Adversarial Machine Learning

Publication Year: 2017, Page(s):2466 - 2478

In spam and malware detection, attackers exploit randomization to obfuscate malicious data and increase their chances of evading detection at test time, e.g., malware code is typically obfuscated using random strings or byte sequences to hide known exploits. Interestingly, randomization has also been proposed to improve the security of learning algorithms against evasion attacks, as it results in hidi...

• ### A New Local Bipolar Autoassociative Memory Based on External Inputs of Discrete Recurrent Neural Networks With Time Delay

Publication Year: 2017, Page(s):2479 - 2489

In this paper, local bipolar auto-associative memories are presented based on discrete recurrent neural networks with a class of gain-type activation functions. The weight parameters of the neural networks are obtained from a set of inequalities, without any learning procedure. Global exponential stability criteria are established to ensure the accuracy of the restored patterns by considering time del...

• ### Discrete-Time Local Value Iteration Adaptive Dynamic Programming: Admissibility and Termination Analysis

Publication Year: 2017, Page(s):2490 - 2502
Cited by:  Papers (2)

In this paper, a novel local value iteration adaptive dynamic programming (ADP) algorithm is developed to solve infinite-horizon optimal control problems for discrete-time nonlinear systems. This paper focuses on the admissibility properties and termination criteria of discrete-time local value iteration ADP algorithms. In the discrete-time local value iteration ADP algorithm, the...
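The recursion behind value iteration ADP can be illustrated on a toy problem. Below is a minimal sketch of plain (global) value iteration, $V_{k+1}(x)=\min_u [U(x,u)+V_k(f(x,u))]$, on a finite grid; the dynamics `f`, stage cost `U`, and grids are illustrative assumptions, not the paper's local algorithm, which updates the value function only on subsets of states.

```python
# Toy value iteration for a discrete-time optimal control problem.
# All names (STATES, ACTIONS, f, U) are illustrative assumptions.

STATES = [i - 5 for i in range(11)]      # x in {-5, ..., 5}
ACTIONS = [-1, 0, 1]

def f(x, u):
    """Toy dynamics, clipped to the state grid."""
    return max(-5, min(5, x + u))

def U(x, u):
    """Stage cost: penalize distance from the origin and control effort."""
    return x * x + u * u

# Value iteration sweeps: V_{k+1}(x) = min_u [U(x,u) + V_k(f(x,u))]
V = {x: 0.0 for x in STATES}
for _ in range(100):
    V = {x: min(U(x, u) + V[f(x, u)] for u in ACTIONS) for x in STATES}

def greedy(x):
    """Policy extracted from the converged value function."""
    return min(ACTIONS, key=lambda u: U(x, u) + V[f(x, u)])
```

With this cost, the greedy policy steers every state toward the zero-cost state at the origin.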

• ### Pair-$\nu$-SVR: A Novel and Efficient Pairing $\nu$-Support Vector Regression Algorithm

Publication Year: 2017, Page(s):2503 - 2515

This paper proposes a novel and efficient pairing ν-support vector regression (pair-ν-SVR) algorithm that successfully combines the advantages of twin support vector regression (TSVR) and the classical ε-SVR algorithm. In the spirit of TSVR, the proposed pair-ν-SVR solves two quadratic programming problems (QPPs) of smaller size rather than a single larger QPP, and thus has faster learni...

• ### Sampled-Data Consensus of Linear Multi-agent Systems With Packet Losses

Publication Year: 2017, Page(s):2516 - 2527
Cited by:  Papers (2)

In this paper, the consensus problem is studied for a class of multi-agent systems with sampled data and packet losses, where both random and deterministic packet losses are considered. For random packet losses, a Bernoulli-distributed white sequence is used to describe packet dropouts among agents in a stochastic way. For deterministic packet losses, a switched system with stable and uns...
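The Bernoulli packet-loss model in the abstract can be sketched directly: each inter-agent transmission is dropped independently with a fixed probability, and the sampled-data consensus update uses only the received states. All names and parameters here (`N_AGENTS`, `P_LOSS`, `GAIN`, a complete communication graph) are illustrative assumptions, not the paper's setup.

```python
import random

random.seed(0)
N_AGENTS = 5
P_LOSS = 0.2   # probability that a transmitted state update is dropped
GAIN = 0.3     # consensus gain applied at each sampling instant

def step(states):
    """One sampled-data consensus update with Bernoulli packet losses."""
    new = []
    for i, xi in enumerate(states):
        correction = 0.0
        for j, xj in enumerate(states):
            if i == j:
                continue
            # Each link succeeds with probability 1 - P_LOSS (Bernoulli).
            if random.random() >= P_LOSS:
                correction += xj - xi
        new.append(xi + GAIN * correction / (N_AGENTS - 1))
    return new

states = [0.0, 1.0, 2.0, 3.0, 4.0]
for _ in range(200):
    states = step(states)
spread = max(states) - min(states)   # shrinks toward 0 as consensus is reached
```

Despite the random dropouts, each update is a convex combination of the received states, so the agents' values contract toward agreement.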

• ### Needs, Pains, and Motivations in Autonomous Agents

Publication Year: 2017, Page(s):2528 - 2540
Cited by:  Papers (2)

This paper presents the development of a motivated learning (ML) agent with symbolic I/O. Our earlier work on the ML agent was enhanced, giving it autonomy for interaction with other agents. Specifically, we equipped the agent with drives and pains that establish its motivations to learn how to respond to desired and undesired events and create related abstract goals. The purpose of this paper is ...

• ### Adaptive Neural Networks Decentralized FTC Design for Nonstrict-Feedback Nonlinear Interconnected Large-Scale Systems Against Actuator Faults

Publication Year: 2017, Page(s):2541 - 2554
Cited by:  Papers (16)

The problem of active fault-tolerant control (FTC) is investigated for the large-scale nonlinear systems in nonstrict-feedback form. The nonstrict-feedback nonlinear systems considered in this paper consist of unstructured uncertainties, unmeasured states, unknown interconnected terms, and actuator faults (e.g., bias fault and gain fault). A state observer is designed to solve the unmeasurable sta...

• ### Lower Bounds on the Proportion of Leaders Needed for Expected Consensus of 3-D Flocks

Publication Year: 2017, Page(s):2555 - 2565

This paper considers the consensus behavior of a spatially distributed 3-D dynamical network composed of heterogeneous agents: leaders and followers, in which the leaders have preferred information about the destination, while the followers do not. All followers move in a 3-D Euclidean space with a given speed and with their headings updated according to the average velocity of the corres...
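The heading-averaging rule described here resembles a Vicsek-style update, which can be sketched as follows; the interaction radius, speed, and all variable names are assumptions for illustration, not the paper's exact model.

```python
import math

SPEED = 1.0    # all agents move with the same given speed (assumption)
RADIUS = 5.0   # interaction radius (assumption)

def normalize(v):
    n = math.sqrt(sum(c * c for c in v)) or 1.0
    return tuple(c / n for c in v)

def update_headings(positions, headings, is_leader, preferred):
    """Leaders keep the preferred heading; followers align with the
    average heading of all agents within RADIUS (including themselves)."""
    new = []
    for i, p in enumerate(positions):
        if is_leader[i]:
            new.append(preferred)    # leaders know the destination
            continue
        avg = [0.0, 0.0, 0.0]
        for q, g in zip(positions, headings):
            if math.dist(p, q) <= RADIUS:
                avg = [a + c for a, c in zip(avg, g)]
        new.append(normalize(avg))   # align with the neighborhood average
    return new

def move(positions, headings, dt=0.1):
    """Advance every agent at constant speed along its heading."""
    return [tuple(p + SPEED * dt * h for p, h in zip(pos, hdg))
            for pos, hdg in zip(positions, headings)]
```

With even one leader in a connected neighborhood, repeated averaging pulls the followers' headings toward the preferred direction.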

• ### Efficient Exact Inference With Loss Augmented Objective in Structured Learning

Publication Year: 2017, Page(s):2566 - 2579

Structural support vector machine (SVM) is an elegant approach for building complex and accurate models with structured outputs. However, its applicability relies on the availability of efficient inference algorithms: the state-of-the-art training algorithms repeatedly perform inference to compute a subgradient or to find the most violating configuration. In this paper, we propose an exact inferenc...

• ### A Neurodynamic Optimization Approach to Bilevel Quadratic Programming

Publication Year: 2017, Page(s):2580 - 2591
Cited by:  Papers (1)

This paper presents a neurodynamic optimization approach to bilevel quadratic programming (BQP). Based on the Karush-Kuhn-Tucker (KKT) theorem, the BQP problem is reduced to a one-level mathematical program subject to complementarity constraints (MPCC). It is proved that the global solution of the MPCC is the minimal one of the optimal solutions to multiple convex optimization subproblems. A recur...
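The KKT-based reduction mentioned in the abstract follows a standard pattern: the convex lower-level program is replaced by its first-order optimality and complementarity conditions. A generic sketch, not necessarily the paper's exact formulation:

$$
\min_{x,\,y}\ F(x,y)\quad \text{s.t.}\quad y \in \arg\min_{y'} \{\, f(x,y') : g(x,y') \le 0 \,\}
$$

becomes the one-level MPCC

$$
\min_{x,\,y,\,\lambda}\ F(x,y)\quad \text{s.t.}\quad \nabla_y f(x,y) + \nabla_y g(x,y)^{\top}\lambda = 0,\qquad 0 \le \lambda \perp -g(x,y) \ge 0,
$$

whose global solution can then be sought among the solutions of finitely many convex subproblems, as the abstract states.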

• ### Global Sensitivity Estimates for Neural Network Classifiers

Publication Year: 2017, Page(s):2592 - 2604

Artificial neural networks (ANNs) have traditionally been seen as black-box models, because, although they are able to find “hidden” relations between inputs and outputs with a high approximation capacity, their structure seldom provides any insights on the structure of the functions being approximated. Several research papers have tried to debunk the black-box nature of ANNs, since ...

• ### Backstepping Design of Adaptive Neural Fault-Tolerant Control for MIMO Nonlinear Systems

Publication Year: 2017, Page(s):2605 - 2613
Cited by:  Papers (1)

In this paper, an adaptive controller is developed for a class of multi-input multi-output (MIMO) nonlinear systems with neural networks (NNs) used as a modeling tool. It is shown that all the signals in the closed-loop system with the proposed adaptive neural controller are globally uniformly bounded for any external input in L[0,∞]. In our control design, the upper bound of the NN modeling er...

• ### Dealing With the Issues Crucially Related to the Functionality and Reliability of NN-Associated Control for Nonlinear Uncertain Systems

Publication Year: 2017, Page(s):2614 - 2625
Cited by:  Papers (1)

The “universal” approximating/learning feature of neural networks (NNs), widely and extensively used for control design, is contingent upon several critical conditions; if any one of them is not satisfied, the feature is lost. In this paper, we show that these conditions are closely linked with several fundamental issues that have been overlooked in most existing NN-based c...

• ### Sampled-Data Synchronization of Markovian Coupled Neural Networks With Mode Delays Based on Mode-Dependent LKF

Publication Year: 2017, Page(s):2626 - 2637
Cited by:  Papers (5)

This paper investigates the sampled-data synchronization problem of Markovian coupled neural networks with mode-dependent interval time-varying delays and aperiodic sampling intervals based on an enhanced input delay approach. A mode-dependent augmented Lyapunov-Krasovskii functional (LKF) is utilized, which makes the LKF matrices mode-dependent as much as possible. By applying an extended Jensen's in...

• ### Quantized Synchronization of Chaotic Neural Networks With Scheduled Output Feedback Control

Publication Year: 2017, Page(s):2638 - 2647
Cited by:  Papers (2)

In this paper, the synchronization problem of master-slave chaotic neural networks with remote sensors, quantization process, and communication time delays is investigated. The information communication channel between the master chaotic neural network and slave chaotic neural network consists of several remote sensors, with each sensor able to access only partial knowledge of output information o...

• ### Finite-Time Stabilization and Adaptive Control of Memristor-Based Delayed Neural Networks

Publication Year: 2017, Page(s):2648 - 2659

The finite-time stability problem has been a hot topic in control and systems engineering. This paper deals with the finite-time stabilization issue of memristor-based delayed neural networks (MDNNs) via two control approaches. First, in order to stabilize MDNNs in finite time, a delayed state feedback controller is proposed. Then, a novel adaptive strategy is applied to the delayed ...

• ### Evaluating the Visualization of What a Deep Neural Network Has Learned

Publication Year: 2017, Page(s):2660 - 2673
Cited by:  Papers (3)

Deep neural networks (DNNs) have demonstrated impressive performance in complex machine learning tasks such as image classification or speech recognition. However, due to their multilayer nonlinear structure, they are not transparent, i.e., it is hard to grasp what makes them arrive at a particular classification or recognition decision, given a new unseen data sample. Recently, several approaches...
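One common way to evaluate such heatmaps is a perturbation analysis: flip the inputs ranked most relevant first and watch the model's score; a faithful heatmap makes the score collapse quickly. A toy sketch with an assumed linear scorer (the function names and setup are illustrative, not the paper's protocol):

```python
def score(image, weights):
    """Toy 'model': a linear scorer over flattened pixels (assumption)."""
    return sum(p * w for p, w in zip(image, weights))

def perturbation_curve(image, relevance, weights, baseline=0.0):
    """Flip pixels in order of decreasing claimed relevance and record
    the model score after each flip."""
    order = sorted(range(len(image)), key=lambda i: relevance[i], reverse=True)
    img = list(image)
    curve = [score(img, weights)]
    for i in order:
        img[i] = baseline                 # flip the next most relevant pixel
        curve.append(score(img, weights))
    return curve

weights = [3.0, 1.0, 2.0, 0.5]
image = [1.0, 1.0, 1.0, 1.0]
good = perturbation_curve(image, weights, weights)        # true relevance
bad = perturbation_curve(image, weights[::-1], weights)   # wrong relevance
```

The curve for the faithful heatmap (`good`) drops faster, so its area under the curve is smaller, which is the quantity such evaluations compare.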

• ### A New Discrete-Time Multi-Constrained $K$-Winner-Take-All Recurrent Network and Its Application to Prioritized Scheduling

Publication Year: 2017, Page(s):2674 - 2685

In this paper, we propose a novel discrete-time recurrent neural network aiming to resolve a new class of multi-constrained K-winner-take-all (K-WTA) problems. By facilitating specially designed asymmetric neuron weights, the proposed model is capable of operating in a fully parallel manner, thereby allowing true digital implementation. This paper also provides theorems that delineate the theoreti...
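A K-WTA computation can be sketched generically as a threshold-adjustment iteration: a shared inhibition level is tuned (here by bisection) until exactly K units remain active. This is an illustrative formulation, not the paper's recurrent network.

```python
def k_wta(inputs, k, iters=60):
    """Return a 0/1 activity pattern with exactly k winners,
    assuming the inputs are distinct."""
    # Invariant: more than k units exceed lo; at most k units exceed hi.
    lo, hi = min(inputs) - 1.0, max(inputs) + 1.0
    for _ in range(iters):
        t = 0.5 * (lo + hi)                       # candidate inhibition level
        if sum(1 for u in inputs if u > t) > k:
            lo = t                                # too many active: raise it
        else:
            hi = t                                # too few active: lower it
    # hi has converged just above the (k+1)-th largest input,
    # so thresholding at hi keeps exactly the k largest units active.
    return [1 if u > hi else 0 for u in inputs]
```

For example, `k_wta([0.3, 0.9, 0.1, 0.7, 0.5], 2)` marks the two largest inputs as winners.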

• ### Online Training of an Opto-Electronic Reservoir Computer Applied to Real-Time Channel Equalization

Publication Year: 2017, Page(s):2686 - 2698
Cited by:  Papers (1)

Reservoir computing is a bioinspired computing paradigm for processing time-dependent signals. The performance of its analog implementation is comparable to other state-of-the-art algorithms for tasks such as speech recognition or chaotic time series prediction, but such implementations are often constrained by the offline training methods commonly employed. Here, we investigated the online learning approach by ...
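Online reservoir training of the kind described here can be sketched with a tiny software reservoir whose linear readout is adapted sample by sample with an LMS rule; the sizes, gains, and toy task (one-step prediction of a sine wave) are assumptions, not the paper's opto-electronic setup.

```python
import math
import random

random.seed(1)
N = 30                                           # reservoir size (assumption)
W_in = [random.uniform(-0.5, 0.5) for _ in range(N)]
W = [[random.uniform(-1.0, 1.0) for _ in range(N)] for _ in range(N)]
# Crude stability control: scale so every row's absolute sum is below 0.9.
scale = 0.9 / max(sum(abs(w) for w in row) for row in W)
W = [[w * scale for w in row] for row in W]

state = [0.0] * N
readout = [0.0] * N
MU = 0.05                                        # LMS learning rate

errs = []
for t in range(2000):
    u = math.sin(0.3 * t)                        # input sample
    target = math.sin(0.3 * (t + 1))             # one-step-ahead target
    # Fixed random recurrent dynamics with tanh nonlinearity.
    state = [math.tanh(W_in[i] * u + sum(W[i][j] * state[j] for j in range(N)))
             for i in range(N)]
    y = sum(r * s for r, s in zip(readout, state))
    e = target - y
    # Online LMS step: only the linear readout is trained.
    readout = [r + MU * e * s for r, s in zip(readout, state)]
    errs.append(abs(e))

early = sum(errs[:200]) / 200                    # error before much training
late = sum(errs[-200:]) / 200                    # error after online training
```

Only the readout weights change during operation, which is what makes this kind of training attractive for real-time hardware implementations.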

• ### Fully Decentralized Semi-Supervised Learning via Privacy-Preserving Matrix Completion

Publication Year: 2017, Page(s):2699 - 2711
Cited by:  Papers (2)

Distributed learning refers to the problem of inferring a function when the training data are distributed among different nodes. While significant work has been done in the contexts of supervised and unsupervised learning, the intermediate case of semi-supervised learning in the distributed setting has received less attention. In this paper, we propose an algorithm for this class of problems, by e...

• ### Consensus of Multiagent Systems With Distance-Dependent Communication Networks

Publication Year: 2017, Page(s):2712 - 2726
Cited by:  Papers (1)

In this paper, we study the consensus problem of discrete-time and continuous-time multiagent systems with distance-dependent communication networks, respectively. The communication weight between any two agents is assumed to be a nonincreasing function of their distance. First, we consider networks with fixed connectivity. In this case, the interaction between adjacent agents always exists bu...

• ### A Unified Fisher’s Ratio Learning Method for Spatial Filter Optimization

Publication Year: 2017, Page(s):2727 - 2737

To detect the mental task of interest, spatial filtering has been widely used to enhance the spatial resolution of electroencephalography (EEG). However, the effectiveness of spatial filtering is undermined due to the significant nonstationarity of EEG. Based on regularization, most of the conventional stationary spatial filter design methods address the nonstationarity at the cost of the intercla...

• ### Regularized Class-Specific Subspace Classifier

Publication Year: 2017, Page(s):2738 - 2747

In this paper, we mainly focus on how to achieve the translated subspace representation for each class, which could simultaneously indicate the distribution of the associated class and the differences from its complementary classes. By virtue of the reconstruction problem, the class-specific subspace classifier (CSSC) problem could be represented as a series of biobjective optimization problems, w...

## Aims & Scope

IEEE Transactions on Neural Networks and Learning Systems publishes technical articles that deal with the theory, design, and applications of neural networks and related learning systems.


## Meet Our Editors

Editor-in-Chief
Haibo He
Dept. of Electrical, Computer, and Biomedical Engineering
University of Rhode Island
Kingston, RI 02881, USA
ieeetnnls@gmail.com