# IEEE Transactions on Neural Networks


• ### [Front cover]

Publication Year: 2011, Page(s): C1
PDF (116 KB)
• ### IEEE Transactions on Neural Networks publication information

Publication Year: 2011, Page(s): C2
PDF (39 KB)
• ### Identification of Extended Hammerstein Systems Using Dynamic Self-Optimizing Neural Networks

Publication Year: 2011, Page(s):1169 - 1179
Cited by:  Papers (25)
PDF (563 KB) | HTML

In this paper, a new dynamic self-optimizing neural network (DSONN) with an online-adjusting hidden layer and weights is proposed for a class of extended Hammerstein systems with non-Gaussian noises. The input vector to the network is first determined by means of system order estimation using a designated input signal. Then the hidden layer is generated online, which consists of a growing step according ...
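
For readers unfamiliar with the model class, a Hammerstein system is a static nonlinearity feeding a linear dynamic block. A minimal simulation sketch (the nonlinearity, model orders, and coefficients here are illustrative, not from the paper):

```python
import numpy as np

def hammerstein_step(u_hist, y_hist, b, a):
    """One output sample of a toy Hammerstein system: a static
    nonlinearity f(u) = u + 0.5*u**2 feeds a linear ARX block
    y[k] = a . y_hist + b . f(u_hist)."""
    v = u_hist + 0.5 * u_hist ** 2      # static nonlinear block
    return a @ y_hist + b @ v           # linear dynamic block

rng = np.random.default_rng(1)
u = rng.standard_normal(50)             # excitation signal
y = np.zeros(50)
a = np.array([0.3])                     # one past output (stable: |0.3| < 1)
b = np.array([1.0, 0.5])                # current and one past input
for k in range(2, 50):
    y[k] = hammerstein_step(u[k:k-2:-1], y[k-1:k], b, a)
```

Identification methods such as the one in this paper must recover both blocks from input-output data alone.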

• ### Global Asymptotic Stability for a Class of Generalized Neural Networks With Interval Time-Varying Delays

Publication Year: 2011, Page(s):1180 - 1192
Cited by:  Papers (67)
PDF (353 KB) | HTML

This paper is concerned with global asymptotic stability for a class of generalized neural networks (NNs) with interval time-varying delays, which include two classes of fundamental NNs, i.e., static neural networks (SNNs) and local field neural networks (LFNNs), as their special cases. Some novel delay-independent and delay-dependent stability criteria are derived. These stability criteria are ap...

• ### Quaternion-Valued Nonlinear Adaptive Filtering

Publication Year: 2011, Page(s):1193 - 1206
Cited by:  Papers (51)
PDF (1153 KB) | HTML

A class of nonlinear quaternion-valued adaptive filtering algorithms is proposed based on locally analytic nonlinear activation functions. To circumvent the stringent standard analyticity conditions which are prohibitive to the development of nonlinear adaptive quaternion-valued estimation models, we use the fact that stochastic gradient learning algorithms require only local analyticity at the op...

• ### Semisupervised Generalized Discriminant Analysis

Publication Year: 2011, Page(s):1207 - 1217
Cited by:  Papers (21)
PDF (282 KB) | HTML

Generalized discriminant analysis (GDA) is a commonly used method for dimensionality reduction. In its general form, it seeks a nonlinear projection that simultaneously maximizes the between-class dissimilarity and minimizes the within-class dissimilarity to increase class separability. In real-world applications where labeled data are scarce, GDA may not work very well. However, unlabeled data ar...

• ### Non-Negative Patch Alignment Framework

Publication Year: 2011, Page(s):1218 - 1230
Cited by:  Papers (130)
PDF (941 KB) | HTML

In this paper, we present a non-negative patch alignment framework (NPAF) to unify popular non-negative matrix factorization (NMF) related dimension reduction algorithms. It offers a new viewpoint to better understand the common property of different NMF algorithms. Although the multiplicative update rule (MUR) can solve NPAF and is easy to implement, it converges slowly. Thus, we propose a fast gradi...
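
The multiplicative update rule (MUR) the abstract refers to is, in its classical Lee-Seung form for the Frobenius-norm objective, a pair of element-wise updates. A minimal NumPy sketch under that assumption (the function name is illustrative, not from the paper):

```python
import numpy as np

def nmf_mur(V, rank, iters=300, eps=1e-9, seed=0):
    """Factorize V ~ W @ H with non-negative factors using the classical
    Lee-Seung multiplicative updates for the Frobenius-norm objective."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # H <- H * (W^T V) / (W^T W H)
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # W <- W * (V H^T) / (W H H^T)
    return W, H

V = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])                # exactly rank-1, non-negative
W, H = nmf_mur(V, rank=1)
err = np.linalg.norm(V - W @ H)                # approaches zero
```

The updates keep both factors non-negative by construction, which is why MUR is easy to implement; the slow convergence the abstract mentions is visible in the number of iterations needed even on tiny problems.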

• ### Consensus Analysis of Multiagent Networks via Aggregated and Pinning Approaches

Publication Year: 2011, Page(s):1231 - 1240
Cited by:  Papers (30)
PDF (664 KB) | HTML

In this paper, the consensus problem of multiagent nonlinear directed networks (MNDNs) is discussed in the case that an MNDN does not have a spanning tree to reach the consensus of all nodes. By using the Lie algebra theory, a linear node-and-node pinning method is proposed to achieve a consensus of an MNDN for all nonlinear functions satisfying a given set of conditions. Based on some optimal algor...
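
As background for the consensus objective, a plain linear consensus iteration (not the paper's Lie-algebra pinning method) drives all node states toward agreement whenever the directed graph contains a spanning tree. A small sketch with an illustrative 4-node graph:

```python
import numpy as np

# In-adjacency of a 4-node directed network (A[i, j] = 1 if j -> i).
# Node 0 has no in-neighbours and roots a spanning tree, so every
# state is pulled toward x[0].
A = np.array([[0, 0, 0, 0],
              [1, 0, 0, 0],
              [0, 1, 0, 0],
              [1, 0, 1, 0]], dtype=float)

x = np.array([1.0, 3.0, -2.0, 5.0])     # initial agent states
eps = 0.3                               # step size (eps * max in-degree < 1)
for _ in range(200):
    # each agent moves toward the sum of its in-neighbours' offsets
    x = x + eps * (A @ x - A.sum(axis=1) * x)

spread = x.max() - x.min()              # shrinks toward 0 at consensus
```

When no spanning tree exists, this iteration settles into separate clusters instead of one value, which is the situation the paper's pinning method is designed to repair.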

• ### Practical Conditions for Effectiveness of the Universum Learning

Publication Year: 2011, Page(s):1241 - 1255
Cited by:  Papers (18)
PDF (1220 KB) | HTML

Many applications of machine learning involve analysis of sparse high-dimensional data, in which the number of input features is larger than the number of data samples. Standard inductive learning methods may not be sufficient for such data, and this provides motivation for nonstandard learning settings. This paper investigates a new learning methodology called learning through contradictions or U...

• ### $k$-NS: A Classifier by the Distance to the Nearest Subspace

Publication Year: 2011, Page(s):1256 - 1268
Cited by:  Papers (16)
PDF (537 KB) | HTML

To improve the classification performance of k-NN, this paper presents a classifier, called k-NS, based on the Euclidean distances from a query sample to the nearest subspaces. Each nearest subspace is spanned by the k nearest samples of the same class. A simple discriminant is derived to calculate the distances due to the geometric meaning of the Gramian, and the calculation stab...
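
The distance-to-subspace idea can be sketched with ordinary least squares standing in for the paper's Gramian-based discriminant (the function name and data are illustrative):

```python
import numpy as np

def k_ns_classify(X, y, query, k=2):
    """Toy k-NS-style rule: for each class, take the k training samples
    nearest to the query, span a linear subspace with them, and pick the
    class whose subspace is closest in Euclidean distance."""
    best_label, best_dist = None, np.inf
    for label in np.unique(y):
        Xc = X[y == label]
        idx = np.argsort(np.linalg.norm(Xc - query, axis=1))[:k]
        B = Xc[idx].T                              # basis vectors as columns
        coef, *_ = np.linalg.lstsq(B, query, rcond=None)
        dist = np.linalg.norm(query - B @ coef)    # residual = subspace distance
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

X = np.array([[1.0, 0.0], [2.0, 0.0],   # class 0 lies along the x-axis
              [0.0, 1.0], [0.0, 2.0]])  # class 1 lies along the y-axis
y = np.array([0, 0, 1, 1])
pred = k_ns_classify(X, y, np.array([3.0, 0.2]), k=2)   # much nearer the x-axis
```

Unlike plain k-NN, the query can be far from every individual sample yet close to the subspace a class spans, as in this example.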

• ### Support Vector Machines With Constraints for Sparsity in the Primal Parameters

Publication Year: 2011, Page(s):1269 - 1283
Cited by:  Papers (3)
PDF (603 KB) | HTML

This paper introduces a new support vector machine (SVM) formulation to obtain sparse solutions in the primal SVM parameters, providing a new method for feature selection based on SVMs. This new approach adds constraints to the classical ones, dropping the weights associated with those features that are likely to be irrelevant. A ν-SVM formulation has been used, where ν ...

• ### Rapid Detection of Small Oscillation Faults via Deterministic Learning

Publication Year: 2011, Page(s):1284 - 1296
Cited by:  Papers (21)
PDF (689 KB) | HTML

Detection of small faults is one of the most important and challenging tasks in the area of fault diagnosis. In this paper, we present an approach for the rapid detection of small oscillation faults based on a recently proposed deterministic learning (DL) theory. The approach consists of two phases: the training phase and the test phase. In the training phase, the system dynamics underlying normal...

• ### Convergence of Cyclic and Almost-Cyclic Learning With Momentum for Feedforward Neural Networks

Publication Year: 2011, Page(s):1297 - 1306
Cited by:  Papers (13)
PDF (311 KB) | HTML

Two backpropagation algorithms with momentum for feedforward neural networks with a single hidden layer are considered. It is assumed that the training samples are supplied to the network in a cyclic or an almost-cyclic fashion in the learning procedure, i.e., in each training cycle, each sample of the training set is supplied in a fixed or a stochastic order respectively to the network exactly on...
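
The momentum update underlying both algorithms can be illustrated on a tiny least-squares problem with a fixed (cyclic) sample order. A sketch with illustrative step sizes, not the paper's network or conditions:

```python
import numpy as np

# Cyclic gradient learning with momentum on a tiny consistent
# least-squares problem: sample i contributes the error (a_i . w - b_i)**2,
# and the samples are visited in the same fixed order every cycle.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 2.0, 3.0])            # consistent: w* = (1, 2)
w = np.zeros(2)
v = np.zeros(2)                          # momentum buffer
eta, mu = 0.1, 0.5                       # learning rate, momentum factor
for _ in range(300):                     # training cycles
    for a_i, b_i in zip(A, b):           # fixed (cyclic) sample order
        grad = (a_i @ w - b_i) * a_i     # per-sample gradient
        v = mu * v - eta * grad          # momentum accumulates past gradients
        w = w + v

err = np.linalg.norm(w - np.array([1.0, 2.0]))
```

An almost-cyclic variant would shuffle the sample order each cycle while still visiting every sample exactly once; the convergence analysis in the paper covers both regimes.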

• ### $\ell_{p}$-$\ell_{q}$ Penalty for Sparse Linear and Sparse Multiple Kernel Multitask Learning

Publication Year: 2011, Page(s):1307 - 1320
Cited by:  Papers (35)
PDF (539 KB) | HTML

Recently, there has been much interest in the multitask learning (MTL) problem with the constraint that tasks should share a common sparsity profile. Such a problem can be addressed through a regularization framework where the regularizer induces a joint-sparsity pattern between task decision functions. We follow this principled framework and focus on $\ell_p$-$\ell_q$ (wi...
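
The mixed-norm penalty in question, in its usual form $\sum_j \bigl(\sum_t |W_{jt}|^q\bigr)^{p/q}$ over a feature-by-task weight matrix, is a one-liner to evaluate. A sketch (the function name is illustrative):

```python
import numpy as np

def lp_lq_penalty(W, p=1.0, q=2.0):
    """sum_j (sum_t |W[j, t]|**q)**(p/q): rows index features, columns
    index tasks; p <= 1 with q > 1 favours rows that are zero across
    every task, i.e. a sparsity profile shared between tasks."""
    row_q_norms = np.sum(np.abs(W) ** q, axis=1) ** (1.0 / q)
    return float(np.sum(row_q_norms ** p))

W = np.array([[3.0, 4.0],    # feature 0: used by both tasks
              [0.0, 0.0]])   # feature 1: used by neither
val = lp_lq_penalty(W, p=1.0, q=2.0)   # l1-l2 (group-lasso) value: 5.0
```

Setting p = 1, q = 2 recovers the group lasso; varying p and q trades off how aggressively whole feature rows are zeroed against how weights spread across tasks within a row.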

• ### Cerebellar Input Configuration Toward Object Model Abstraction in Manipulation Tasks

Publication Year: 2011, Page(s):1321 - 1328
Cited by:  Papers (13)
PDF (562 KB) | HTML

It is widely assumed that the cerebellum is one of the main nervous centers involved in correcting and refining planned movement and accounting for disturbances occurring during movement, for instance, due to the manipulation of objects which affect the kinematics and dynamics of the robot-arm plant model. In this brief, we evaluate a way in which a cerebellar-like structure can store a model in t...

• ### Adaptive Neural Output Feedback Controller Design With Reduced-Order Observer for a Class of Uncertain Nonlinear SISO Systems

Publication Year: 2011, Page(s):1328 - 1334
Cited by:  Papers (103)
PDF (291 KB) | HTML

An adaptive output feedback control is studied for uncertain nonlinear single-input-single-output systems with partial unmeasured states. In the scheme, a reduced-order observer (ROO) is designed to estimate those unmeasured states. By employing radial basis function neural networks and incorporating the ROO into a new backstepping design, an adaptive output feedback controller is constructively d...

• ### Minimising Added Classification Error Using Walsh Coefficients

Publication Year: 2011, Page(s):1334 - 1339
Cited by:  Papers (4)
PDF (401 KB) | HTML

Two-class supervised learning in the context of a classifier ensemble may be formulated as learning an incompletely specified Boolean function, and the associated Walsh coefficients can be estimated without the knowledge of the unspecified patterns. Using an extended version of the Tumer-Ghosh model, the relationship between added classification error and second-order Walsh coefficients is establi...
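
The Walsh coefficients mentioned above can be computed directly from a Boolean function's ±1 truth table via the Sylvester-Hadamard construction. A minimal sketch (this is the textbook transform on a fully specified function, not the paper's estimator for incompletely specified ones):

```python
import numpy as np

def walsh_coefficients(truth_table):
    """Walsh coefficients of a Boolean function given its full +/-1
    truth table (length 2**n), via the Sylvester-Hadamard matrix."""
    n = len(truth_table)
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])    # Sylvester construction
    return H @ np.asarray(truth_table, dtype=float) / n

# XOR on two inputs over 00, 01, 10, 11, encoded +1 = false, -1 = true
xor = [1, -1, -1, 1]
coeffs = walsh_coefficients(xor)   # all weight on the parity character
```

Each coefficient measures the correlation of the function with one parity (character) of the inputs, which is what lets the paper relate low-order coefficients to added classification error.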

• ### IEEE Symposium on Computational Intelligence in Bioinformatics and Computational Biology

Publication Year: 2011, Page(s): 1340
PDF (820 KB)
• ### IEEE Computational Intelligence Society Information

Publication Year: 2011, Page(s): C3
PDF (37 KB)
• ### IEEE Transactions on Neural Networks Information for authors

Publication Year: 2011, Page(s): C4
PDF (37 KB)

## Aims & Scope

IEEE Transactions on Neural Networks is devoted to the science and technology of neural networks, publishing significant technical knowledge, exploratory developments, and applications of neural networks spanning biology, software, and hardware.

This Transactions ceased production in 2011. The current retitled publication is IEEE Transactions on Neural Networks and Learning Systems.
