# IEEE Transactions on Neural Networks and Learning Systems

Displaying Results 1 - 19 of 19
• ### Table of contents

Publication Year: 2012, Page(s): C1
| PDF (118 KB)
• ### IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS publication information

Publication Year: 2012, Page(s): C2
| PDF (40 KB)
• ### Generalization Characteristics of Complex-Valued Feedforward Neural Networks in Relation to Signal Coherence

Publication Year: 2012, Page(s):541 - 551
Cited by:  Papers (41)
| PDF (1082 KB) | HTML

Applications of complex-valued neural networks (CVNNs) have expanded widely in recent years, in particular in radar and coherent imaging systems. In general, the most important merit of neural networks lies in their generalization ability. This paper compares the generalization characteristics of complex-valued and real-valued feedforward neural networks in terms of the coherence of the signals to ...

• ### Tackling Learning Intractability Through Topological Organization and Regulation of Cortical Networks

Publication Year: 2012, Page(s):552 - 564
Cited by:  Papers (10)
| PDF (1972 KB) | HTML

A key challenge in evolving control systems for robots using neural networks is training tractability. Evolving monolithic fixed topology neural networks is shown to be intractable with limited supervision in high dimensional search spaces. Common strategies to overcome this limitation are to provide more supervision by encouraging particular solution strategies, manually decomposing the task and ...

• ### Neural Learning Circuits Utilizing Nano-Crystalline Silicon Transistors and Memristors

Publication Year: 2012, Page(s):565 - 573
Cited by:  Papers (66)
| PDF (2637 KB) | HTML

Properties of neural circuits are demonstrated via SPICE simulations and their applications are discussed. The neuron and synapse subcircuits include ambipolar nano-crystalline silicon transistor and memristor device models based on measured data. Neuron circuit characteristics and the Hebbian synaptic learning rule are shown to be similar to those observed in biology. Changes in the average firing rate learning ru...

• ### Spiking Neural Network Model of Sound Localization Using the Interaural Intensity Difference

Publication Year: 2012, Page(s):574 - 586
Cited by:  Papers (13)  |  Patents (4)
| PDF (1244 KB) | HTML

In this paper, a spiking neural network (SNN) architecture to simulate the sound localization ability of the mammalian auditory pathways using the interaural intensity difference cue is presented. The lateral superior olive was the inspiration for the architecture, which required the integration of an auditory periphery (cochlea) model and a model of the medial nucleus of the trapezoid body. The S...

• ### Minimum Data Requirement for Neural Networks Based on Power Spectral Density Analysis

Publication Year: 2012, Page(s):587 - 595
Cited by:  Papers (2)
| PDF (908 KB) | HTML

One of the most critical challenges ahead for diesel engines is to identify new techniques for fuel economy improvement without compromising emissions regulations. One technique is the precise control of air/fuel ratio, which requires the measurement of instantaneous fuel consumption. Measurement accuracy and repeatability for fuel rate is the key to successfully controlling the air/fuel ratio and...

• ### Feature Extraction With Deep Neural Networks by a Generalized Discriminant Analysis

Publication Year: 2012, Page(s):596 - 608
Cited by:  Papers (43)
| PDF (9019 KB) | HTML

We present an approach to feature extraction that is a generalization of the classical linear discriminant analysis (LDA) on the basis of deep neural networks (DNNs). As in LDA, discriminative features generated from independent Gaussian class conditionals are assumed. This modeling has the advantages that the intrinsic dimensionality of the feature space is bounded by the number of classes and t...
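The class-count bound mentioned in this abstract comes from classical LDA: with C classes, the between-class scatter matrix has rank at most C - 1, so LDA yields at most C - 1 discriminative directions. A pure-NumPy sketch on illustrative synthetic data (not the paper's DNN generalization) makes the bound visible:

```python
import numpy as np

rng = np.random.default_rng(0)
# Three Gaussian classes in 5-D; classical LDA can extract at most
# C - 1 = 2 discriminative directions, the bound carried over to DNNs.
means = np.array([[0, 0, 0, 0, 0],
                  [3, 0, 0, 0, 0],
                  [0, 3, 0, 0, 0]], dtype=float)
X = np.vstack([m + rng.normal(size=(50, 5)) for m in means])
y = np.repeat([0, 1, 2], 50)

mu = X.mean(axis=0)
Sw = np.zeros((5, 5))  # within-class scatter
Sb = np.zeros((5, 5))  # between-class scatter
for c in range(3):
    Xc = X[y == c]
    mc = Xc.mean(axis=0)
    Sw += (Xc - mc).T @ (Xc - mc)
    Sb += len(Xc) * np.outer(mc - mu, mc - mu)

# Eigenvalues of Sw^{-1} Sb: at most C - 1 = 2 are (numerically) nonzero,
# because Sb is built from 3 mean deviations that sum to zero.
eigvals = np.linalg.eigvals(np.linalg.solve(Sw, Sb))
print(np.sum(np.abs(eigvals) > 1e-8))  # → 2
```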

• ### Fast and Efficient Second-Order Method for Training Radial Basis Function Networks

Publication Year: 2012, Page(s):609 - 619
Cited by:  Papers (43)
| PDF (992 KB) | HTML

This paper proposes an improved second order (ISO) algorithm for training radial basis function (RBF) networks. Besides the traditional parameters, including centers, widths and output weights, the input weights on the connections between input layer and hidden layer are also adjusted during the training process. More accurate results can be obtained by increasing variable dimensions. Initial cent...

• ### General Robot Kinematics Decomposition Without Intermediate Markers

Publication Year: 2012, Page(s):620 - 630
Cited by:  Papers (8)
| PDF (956 KB) | HTML

The calibration of serial manipulators with high numbers of degrees of freedom by means of machine learning is a complex and time-consuming task. With the help of a simple strategy, this complexity can be drastically reduced and the speed of the learning procedure can be increased. When the robot is virtually divided into shorter kinematic chains, these subchains can be learned separately and henc...

• ### Feature Extraction for Change-Point Detection Using Stationary Subspace Analysis

Publication Year: 2012, Page(s):631 - 643
Cited by:  Papers (18)
| PDF (1939 KB) | HTML

Detecting changes in high-dimensional time series is difficult because it involves the comparison of probability densities that need to be estimated from finite samples. In this paper, we present the first feature extraction method tailored to change-point detection, which is based on an extended version of stationary subspace analysis. We reduce the dimensionality of the data to the most nonstati...

• ### Tangent Hyperplane Kernel Principal Component Analysis for Denoising

Publication Year: 2012, Page(s):644 - 656
Cited by:  Papers (5)
| PDF (1568 KB) | HTML

Kernel principal component analysis (KPCA) is a method widely used for denoising multivariate data. Using geometric arguments, we investigate why a projection operation inherent to all existing KPCA denoising algorithms can sometimes cause very poor denoising. Based on this, we propose a modification to the projection operation that remedies this problem and can be incorporated into any of the exi...
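The projection step this abstract refers to is common to all KPCA denoising pipelines: project each sample onto the leading kernel components, then map back to input space (the preimage problem). A minimal NumPy sketch of that generic step in the linear-kernel special case, where the preimage is exact; the data are illustrative, and the tangent-hyperplane modification proposed in the paper is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)
# Noisy samples near the line y = 2x: the signal is 1-D, the noise 2-D.
t = rng.uniform(-1.0, 1.0, size=(200, 1))
X = np.hstack([t, 2.0 * t]) + 0.05 * rng.normal(size=(200, 2))

# KPCA with a linear kernel reduces to ordinary PCA, so the projection
# and its preimage can be written down explicitly.
Xc = X - X.mean(axis=0)
K = Xc @ Xc.T                          # Gram (kernel) matrix
eigvals, eigvecs = np.linalg.eigh(K)   # ascending eigenvalues
alpha = eigvecs[:, -1:]                # leading kernel principal component
scores = K @ alpha / np.sqrt(eigvals[-1])          # projections
direction = Xc.T @ alpha / np.sqrt(eigvals[-1])    # unit vector in input space
X_denoised = X.mean(axis=0) + scores @ direction.T  # exact preimage
```

The denoised samples lie on the line spanned by the leading component, so the perpendicular noise is removed; with nonlinear kernels the preimage is only approximate, which is where the projection can fail in the way the paper analyzes.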

• ### Modified Kolmogorov's Neural Network in the Identification of Hammerstein and Wiener Systems

Publication Year: 2012, Page(s):657 - 662
Cited by:  Papers (10)
| PDF (336 KB) | HTML

This brief deals with the possibilities of using the modified Kolmogorov's neural network for the identification of nonlinear dynamic systems, among them the Wiener and Hammerstein systems. The training algorithm is simple, converges well, and has a small approximation error. The modified neural network is characterized by a simple computer algorithm; it also omits complicated ...
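The two block-oriented structures named in this abstract differ only in the order of their blocks: a Hammerstein system is a static nonlinearity followed by linear dynamics, a Wiener system is the reverse. A toy NumPy simulation, with the nonlinearity and dynamics chosen purely for illustration:

```python
import numpy as np

def hammerstein(u, a=0.8, f=np.square):
    """Hammerstein system: static nonlinearity f, then linear dynamics
    y[t] = a * y[t-1] + f(u[t])."""
    y, prev = np.zeros(len(u)), 0.0
    for t, ut in enumerate(u):
        prev = a * prev + f(ut)   # nonlinear block feeds the linear block
        y[t] = prev
    return y

def wiener(u, a=0.8, f=np.square):
    """Wiener system: the same blocks in reverse order (linear, then f)."""
    y, prev = np.zeros(len(u)), 0.0
    for t, ut in enumerate(u):
        prev = a * prev + ut      # linear block first
        y[t] = f(prev)            # static nonlinearity at the output
    return y

impulse = np.array([1.0, 0.0, 0.0])
print(hammerstein(impulse))  # values: 1, 0.8, 0.64
print(wiener(impulse))       # values: 1, 0.64, 0.4096
```

The identical impulse produces different responses, which is why identification methods must account for the block ordering.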

• ### Mode and Delay-Dependent Adaptive Exponential Synchronization in $p$th Moment for Stochastic Delayed Neural Networks With Markovian Switching

Publication Year: 2012, Page(s):662 - 668
Cited by:  Papers (68)
| PDF (300 KB) | HTML

In this brief, the analysis problem of the mode and delay-dependent adaptive exponential synchronization in $p$th moment is considered for stochastic delayed neural networks with Markovian switching. By utilizing a new nonnegative function and the $M$-matrix approach, several sufficient conditions to ensure the mode and delay-dependent adaptive exponential synchronization in $p$th moment for stochastic del...

• ### A Priori Guaranteed Evolution Within the Neural Network Approximation Set and Robustness Expansion via Prescribed Performance Control

Publication Year: 2012, Page(s):669 - 675
Cited by:  Papers (25)
| PDF (613 KB) | HTML

A neuroadaptive control scheme for strict feedback systems is designed, which is capable of achieving prescribed performance guarantees for the output error while keeping all closed-loop signals bounded, despite the presence of unknown system nonlinearities and external disturbances. The aforementioned properties are induced without resorting to a special initialization procedure or a tricky contr...

• ### Analysis on the Convergence Time of Dual Neural Network-Based $k{\rm WTA}$

Publication Year: 2012, Page(s):676 - 682
Cited by:  Papers (15)
| PDF (872 KB) | HTML

A k-winner-take-all (kWTA) network is able to find the k largest numbers among n inputs. Recently, a dual neural network (DNN) approach was proposed to implement the kWTA process. Compared to the conventional approach, the DNN approach has far fewer interconnections. A rough upper bound on the convergence time of the DNN-kWTA model, which is expressed in terms of input variables, was ...
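The kWTA operation itself (selecting the k largest of n inputs) is easy to state directly; a minimal NumPy sketch of the target function, not of the DNN dynamics whose convergence time the paper analyzes:

```python
import numpy as np

def kwta(x, k):
    """Return a binary mask marking the k largest entries of x.

    This is the steady-state output a kWTA network converges to,
    computed here by direct sorting rather than neural dynamics.
    """
    x = np.asarray(x, dtype=float)
    winners = np.argpartition(-x, k - 1)[:k]  # indices of the k largest
    mask = np.zeros_like(x)
    mask[winners] = 1.0
    return mask

print(kwta([0.3, 0.9, 0.1, 0.7, 0.5], 2))  # winners at indices 1 and 3
```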

• ### Reducing the Number of Support Vectors of SVM Classifiers Using the Smoothed Separable Case Approximation

Publication Year: 2012, Page(s):682 - 688
Cited by:  Papers (44)
| PDF (714 KB) | HTML

In this brief, we propose a new method to reduce the number of support vectors of support vector machine (SVM) classifiers. We formulate the approximation of an SVM solution as a classification problem that is separable in the feature space. Due to the separability, the hard-margin SVM can be used to solve it. This approach, which we call the separable case approximation (SCA), is very similar to ...

• ### IEEE Computational Intelligence Society Information

Publication Year: 2012, Page(s): C3
| PDF (38 KB)
• ### IEEE Transactions on Neural Networks information for authors

Publication Year: 2012, Page(s): C4
| PDF (38 KB)

## Aims & Scope

IEEE Transactions on Neural Networks and Learning Systems publishes technical articles that deal with the theory, design, and applications of neural networks and related learning systems.


## Meet Our Editors

Editor-in-Chief
Haibo He
Dept. of Electrical, Computer, and Biomedical Engineering
University of Rhode Island
Kingston, RI 02881, USA
ieeetnnls@gmail.com