# IEEE Transactions on Neural Networks

Displaying Results 1 - 19 of 19

• ### [Front cover]

Publication Year: 2011, Page(s): C1
PDF (113 KB)
• ### IEEE Transactions on Neural Networks publication information

Publication Year: 2011, Page(s): C2
PDF (39 KB)
• ### Incremental Learning of Concept Drift in Nonstationary Environments

Publication Year: 2011, Page(s):1517 - 1531
Cited by:  Papers (185)  |  Patents (1)
PDF (1191 KB) | HTML

We introduce an ensemble-of-classifiers-based approach for incremental learning of concept drift, characterized by nonstationary environments (NSEs), where the underlying data distributions change over time. The proposed algorithm, named Learn++.NSE, learns from consecutive batches of data without making any assumptions on the nature or rate of drift; it can learn from such environments...

• ### Textual and Visual Content-Based Anti-Phishing: A Bayesian Approach

Publication Year: 2011, Page(s):1532 - 1546
Cited by:  Papers (40)
PDF (634 KB) | HTML

A novel framework using a Bayesian approach for content-based phishing web page detection is presented. Our model takes into account textual and visual contents to measure the similarity between the protected web page and suspicious web pages. A text classifier, an image classifier, and an algorithm fusing the results from the classifiers are introduced. An outstanding feature of this paper is the exp...

• ### Observer Design for Switched Recurrent Neural Networks: An Average Dwell Time Approach

Publication Year: 2011, Page(s):1547 - 1556
Cited by:  Papers (75)
PDF (469 KB) | HTML

This paper is concerned with the problem of observer design for switched recurrent neural networks with time-varying delay. Attention is focused on designing full-order observers that guarantee the global exponential stability of the error dynamics. Based on the average dwell time approach and the free-weighting matrix technique, delay-dependent sufficient conditions are developed fo...

• ### Chaotic Simulated Annealing by a Neural Network With a Variable Delay: Design and Application

Publication Year: 2011, Page(s):1557 - 1565
Cited by:  Papers (10)
PDF (289 KB) | HTML

In this paper, we have three goals: the first is to delineate the advantages of a variably delayed system, the second is to find a more intuitive Lyapunov function for a delayed neural network, and the third is to design a delayed neural network for a quadratic cost function. For delayed neural networks, most researchers construct a Lyapunov function based on the linear matrix inequality (LMI) app...

• ### Passivity Analysis for Discrete-Time Stochastic Markovian Jump Neural Networks With Mixed Time Delays

Publication Year: 2011, Page(s):1566 - 1575
Cited by:  Papers (232)
PDF (402 KB) | HTML

In this paper, passivity analysis is conducted for discrete-time stochastic neural networks with both Markovian jumping parameters and mixed time delays. The mixed time delays consist of both discrete and distributed delays. The Markov chain in the underlying neural networks is finite piecewise homogeneous. By introducing a Lyapunov functional that accounts for the mixed time delays, a delay-depen...

• ### Analysis of Fixed-Point and Coordinate Descent Algorithms for Regularized Kernel Methods

Publication Year: 2011, Page(s):1576 - 1587
Cited by:  Papers (4)
PDF (283 KB) | HTML

In this paper, we analyze the convergence of two general classes of optimization algorithms for regularized kernel methods with a convex loss function and quadratic norm regularization. The first methodology is a new class of algorithms based on fixed-point iterations that are well suited for a parallel implementation and can be used with any convex loss function. The second methodology is based on ...

• ### A New Formulation for Feedforward Neural Networks

Publication Year: 2011, Page(s):1588 - 1598
Cited by:  Papers (47)
PDF (1079 KB) | HTML

The feedforward neural network is one of the most commonly used function approximation techniques and has been applied to a wide variety of problems arising from various disciplines. However, neural networks are black-box models with multiple challenges/difficulties associated with training and generalization. This paper initially looks into the internal behavior of neural networks and develops a de...

• ### Neural Networks-Based Adaptive Control for Nonlinear Time-Varying Delays Systems With Unknown Control Direction

Publication Year: 2011, Page(s):1599 - 1612
Cited by:  Papers (42)
PDF (775 KB) | HTML

This paper investigates neural network (NN) state observer-based adaptive control for a class of nonlinear systems with time-varying delays and unknown control direction. An adaptive neural memoryless observer, in which knowledge of the time delay is not used, is designed to estimate the system states. Furthermore, by applying the property of the function $\tanh^{2}(\vartheta/\varepsilon)/\vartheta$...

• ### Nonlinear Regularization Path for Quadratic Loss Support Vector Machines

Publication Year: 2011, Page(s):1613 - 1625
Cited by:  Papers (3)
PDF (652 KB) | HTML

Regularization path algorithms have been proposed to deal with the model selection problem in several machine learning approaches. These algorithms allow computation of the entire path of solutions for every value of the regularization parameter, using the fact that their solution paths are piecewise linear. In this paper, we extend the applicability of the regularization path algorithm to a class of lea...

• ### Minimum-Volume-Constrained Nonnegative Matrix Factorization: Enhanced Ability of Learning Parts

Publication Year: 2011, Page(s):1626 - 1637
Cited by:  Papers (28)
PDF (680 KB) | HTML

Nonnegative matrix factorization (NMF) with a minimum-volume constraint (MVC) is exploited in this paper. Our results show that MVC can actually improve the sparseness of the results of NMF. This sparseness is $L_{0}$-norm oriented and can give desirable results even in very weak sparseness situations, thereby significantly enhancing the ability of NMF to learn parts. The close re...

• ### Embedding Prior Knowledge Within Compressed Sensing by Neural Networks

Publication Year: 2011, Page(s):1638 - 1649
Cited by:  Papers (7)
PDF (1160 KB) | HTML

In the compressed sensing framework, different algorithms have been proposed for sparse signal recovery from an incomplete set of linear measurements. The best known can be classified into two categories: $\ell_{1}$-norm minimization-based algorithms, and $\ell_{0}$ pseudo-norm minimization with greedy matching pursuit algorithms. In this paper, we propose a modified matching pu...

• ### Efficient Revised Simplex Method for SVM Training

Publication Year: 2011, Page(s):1650 - 1661
Cited by:  Papers (10)
PDF (329 KB) | HTML

Existing active set methods reported in the literature for support vector machine (SVM) training must contend with singularities when solving for the search direction. When a singularity is encountered, an infinite descent direction can be carefully chosen that avoids cycling and allows the algorithm to converge. However, the algorithm implementation is likely to be more complex and less computati...

• ### Stability and $L_{2}$ Performance Analysis of Stochastic Delayed Neural Networks

Publication Year: 2011, Page(s):1662 - 1668
Cited by:  Papers (22)
PDF (223 KB) | HTML

This brief focuses on robust mean-square exponential stability and $L_{2}$ performance analysis for a class of uncertain time-delay neural networks perturbed by both additive and multiplicative stochastic noises. New mean-square exponential stability and $L_{2}$ performance criteria are developed based on the delay-partition Lyapunov-Krasovskii functional method and g...

• ### Deep Learning Regularized Fisher Mappings

Publication Year: 2011, Page(s):1668 - 1675
Cited by:  Papers (16)
PDF (229 KB) | HTML

For classification tasks, it is always desirable to extract features that are most effective for preserving class separability. In this brief, we propose a new feature extraction method called regularized deep Fisher mapping (RDFM), which learns an explicit mapping from the sample space to the feature space using a deep neural network to enhance the separability of features according to the Fisher...

• ### Zhang Neural Network Versus Gradient Neural Network for Solving Time-Varying Linear Inequalities

Publication Year: 2011, Page(s):1676 - 1684
Cited by:  Papers (40)
PDF (601 KB) | HTML

Following the Zhang design method, a new type of recurrent neural network [i.e., the Zhang neural network (ZNN)] is presented, investigated, and analyzed for the online solution of time-varying linear inequalities. A theoretical analysis of the convergence properties of the proposed ZNN model is given. For comparative purposes, the conventional gradient neural network is developed and exploited for solving onlin...

• ### IEEE Computational Intelligence Society Information

Publication Year: 2011, Page(s): C3
PDF (37 KB)
• ### IEEE Transactions on Neural Networks Information for authors

Publication Year: 2011, Page(s): C4
PDF (37 KB)

## Aims & Scope

IEEE Transactions on Neural Networks is devoted to the science and technology of neural networks, disclosing significant technical knowledge, exploratory developments, and applications of neural networks from biology to software to hardware.

This Transactions ceased production in 2011. The current retitled publication is IEEE Transactions on Neural Networks and Learning Systems.
