
IEEE Transactions on Neural Networks

Issue 6 • June 2010

  • Table of contents

    Publication Year: 2010, Page(s): C1
    Freely Available from IEEE
  • IEEE Transactions on Neural Networks publication information

    Publication Year: 2010, Page(s): C2
    Freely Available from IEEE
  • File Access Prediction Using Neural Networks

    Publication Year: 2010, Page(s):869 - 882
    Cited by:  Papers (1)

    One of the most vexing issues in the design of a high-speed computer is the wide gap in access times between the memory and the disk. To solve this problem, static file access predictors have been used. In this paper, we propose dynamic file access predictors using neural networks to significantly improve upon the accuracy, success-per-reference, and effective-success-rate-per-reference by using neural …

  • Sparse Approximation Through Boosting for Learning Large Scale Kernel Machines

    Publication Year: 2010, Page(s):883 - 894
    Cited by:  Papers (10)

    Recently, sparse approximation has become a preferred method for learning large-scale kernel machines. This technique attempts to represent the solution with only a subset of the original data points, also known as basis vectors, which are usually chosen one by one with a forward selection procedure based on some selection criterion. The computational complexity of several resultant algorithms scales as …

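    As a baseline for the procedure described above, here is a minimal sketch of greedy forward selection of basis vectors for kernel ridge regression, using plain residual reduction as the selection criterion (the paper's boosting-based criterion and complexity improvements are different; `K` is assumed to be a precomputed kernel matrix):

    ```python
    import numpy as np

    def forward_select_basis(K, y, n_basis, ridge=1e-3):
        """Greedy forward selection of basis vectors for kernel ridge
        regression: at each step, add the point whose column most reduces
        the regularized squared residual."""
        n = K.shape[0]
        selected = []
        for _ in range(n_basis):
            best_j, best_err = None, np.inf
            for j in range(n):
                if j in selected:
                    continue
                cols = selected + [j]
                Ks = K[:, cols]                              # n x m design matrix
                A = Ks.T @ Ks + ridge * np.eye(len(cols))
                alpha = np.linalg.solve(A, Ks.T @ y)
                err = np.sum((y - Ks @ alpha) ** 2)
                if err < best_err:
                    best_err, best_j = err, j
            selected.append(best_j)
        return selected
    ```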
  • Self-Sustained Irregular Activity in 2-D Small-World Networks of Excitatory and Inhibitory Neurons

    Publication Year: 2010, Page(s):895 - 905
    Cited by:  Papers (5)

    In this paper, we study self-sustained irregular firing activity in 2-D small-world (SW) neural networks consisting of both excitatory and inhibitory neurons by computational modeling. For a proper proportion of unidirectional shortcuts, stable self-sustained activity with irregular firing states indeed occurs in the considered network. By varying the shortcut density while keeping other s…

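    For orientation, a minimal sketch of one common way to build such a topology: a 2-D toroidal lattice with random unidirectional shortcuts (illustrative only; the paper's model additionally distinguishes excitatory and inhibitory neurons and specifies the node dynamics):

    ```python
    import numpy as np

    def sw_lattice_2d(n, p_shortcut, seed=0):
        """Directed edge list of an n-by-n toroidal lattice (4 local
        neighbors per node) plus unidirectional long-range shortcuts
        added with probability p_shortcut per node."""
        rng = np.random.default_rng(seed)
        edges = []
        for i in range(n):
            for j in range(n):
                u = i * n + j
                for di, dj in ((0, 1), (1, 0)):      # local links, both directions
                    v = ((i + di) % n) * n + (j + dj) % n
                    edges += [(u, v), (v, u)]
                if rng.random() < p_shortcut:        # one-way random shortcut
                    t = int(rng.integers(n * n))
                    if t != u:
                        edges.append((u, t))
        return edges
    ```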
  • Monotone and Partially Monotone Neural Networks

    Publication Year: 2010, Page(s):906 - 917
    Cited by:  Papers (8)

    In many classification and prediction problems it is known that the response variable depends on certain explanatory variables. Monotone neural networks can be used as powerful tools to build monotone models with better accuracy and lower variance than ordinary nonmonotone models. Monotonicity is usually obtained by putting constraints on the parameters of the network. In this paper, we will …

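    A minimal sketch of the constraint idea mentioned above: forcing all weights positive (here via exp) together with increasing activations yields a network that is nondecreasing in every input. This is a standard construction, not necessarily the paper's; partial monotonicity would constrain only the weights on paths from the monotone inputs:

    ```python
    import numpy as np

    def monotone_mlp(x, V, w):
        """Two-layer network that is nondecreasing in every input: weights
        are forced positive via exp(), and tanh is an increasing function,
        so every input-output path is monotone."""
        h = np.tanh(x @ np.exp(V))    # positive input-to-hidden weights
        return h @ np.exp(w)          # positive hidden-to-output weights
    ```

    Training then proceeds on the unconstrained parameters V and w, e.g., by gradient descent, and monotonicity holds by construction.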
  • A New One-Layer Neural Network for Linear and Quadratic Programming

    Publication Year: 2010, Page(s):918 - 929
    Cited by:  Papers (7)

    In this paper, we present a new neural network for solving linear and quadratic programming problems in real time by introducing some new vectors. The proposed neural network is stable in the sense of Lyapunov and converges to an exact optimal solution of the original problem when the objective function is convex on the set defined by the equality constraints. Compared with existing one-layer neural …

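    For context, a minimal sketch of a classic projection-type recurrent network for box-constrained quadratic programming, integrated by forward Euler (a standard model used here for illustration; the paper's one-layer network and its added vectors are not reproduced):

    ```python
    import numpy as np

    def projection_nn_qp(Q, c, lo, hi, alpha=0.1, dt=0.01, steps=20000):
        """Minimize 1/2 x'Qx + c'x subject to lo <= x <= hi using the
        projection network dx/dt = -x + P(x - alpha*(Qx + c)), where P
        clips onto the box; integrated by forward Euler."""
        proj = lambda z: np.clip(z, lo, hi)
        x = proj(np.zeros_like(c))
        for _ in range(steps):
            x = x + dt * (-x + proj(x - alpha * (Q @ x + c)))
        return x

    # minimize (x0 - 1)^2 + (x1 - 2)^2 over the box [0, 1.5]^2 -> (1.0, 1.5)
    print(projection_nn_qp(2 * np.eye(2), np.array([-2.0, -4.0]), 0.0, 1.5))
    ```

    An equilibrium of these dynamics satisfies the KKT conditions of the QP, which is why convergence of the state implies optimality for convex Q.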
  • Improved Computation for Levenberg–Marquardt Training

    Publication Year: 2010, Page(s):930 - 937
    Cited by:  Papers (70)

    The improved computation presented in this paper aims to optimize the neural-network learning process based on the Levenberg-Marquardt (LM) algorithm. The quasi-Hessian matrix and gradient vector are computed directly, without Jacobian-matrix multiplication and storage, which solves the memory limitation problem of LM training. Considering the symmetry of the quasi-Hessian matrix, only elements in its upper/lower …

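    A minimal sketch of the accumulation idea the abstract describes: build the quasi-Hessian Q = JᵀJ and the gradient g = Jᵀe from one Jacobian row per training pattern, so the full Jacobian is never formed or stored (`jacobian_rows` is a hypothetical iterable of per-pattern rows; this is generic LM bookkeeping, not the paper's exact routines):

    ```python
    import numpy as np

    def lm_accumulate(jacobian_rows, errors, n_params):
        """Accumulate Q = J'J and g = J'e row by row, so the Jacobian J
        itself is never stored in memory."""
        Q = np.zeros((n_params, n_params))
        g = np.zeros(n_params)
        for j_row, e in zip(jacobian_rows, errors):
            Q += np.outer(j_row, j_row)   # symmetric rank-one update
            g += e * j_row
        return Q, g

    def lm_step(Q, g, mu):
        """Damped LM update dw = -(Q + mu*I)^{-1} g."""
        return -np.linalg.solve(Q + mu * np.eye(len(g)), g)
    ```

    Because each rank-one update keeps Q symmetric, only one triangle of Q actually needs to be computed and stored, which is the saving the abstract alludes to.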
  • Convergence and Objective Functions of Some Fault/Noise-Injection-Based Online Learning Algorithms for RBF Networks

    Publication Year: 2010, Page(s):938 - 947
    Cited by:  Papers (9)

    In the last two decades, many online fault/noise-injection algorithms have been developed to attain fault-tolerant neural networks. However, little theoretical work related to their convergence and objective functions has been reported. This paper studies six common fault/noise-injection-based online learning algorithms for radial basis function (RBF) networks, namely 1) injecting additive in…

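    A minimal sketch of one scheme of this family, assuming additive weight noise injected during online least-mean-square training of RBF output weights (illustrative; the paper analyzes six specific algorithms and derives their objective functions):

    ```python
    import numpy as np

    def rbf_train_weight_noise(X, y, centers, width, sigma=0.05, lr=0.1,
                               epochs=50, seed=0):
        """Online LMS training of RBF output weights in which zero-mean
        Gaussian noise is added to the weights when computing each
        prediction error (one simple noise-injection scheme)."""
        rng = np.random.default_rng(seed)
        w = np.zeros(len(centers))
        for _ in range(epochs):
            for x, t in zip(X, y):
                phi = np.exp(-np.sum((centers - x) ** 2, axis=1) / (2 * width ** 2))
                e = t - (w + sigma * rng.standard_normal(len(w))) @ phi
                w += lr * e * phi             # LMS step with the noisy error
        return w
    ```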
  • Adaptive FIR Neural Model for Centroid Learning in Self-Organizing Maps

    Publication Year: 2010, Page(s):948 - 960
    Cited by:  Papers (7)

    In this paper, a training method for the formation of topology-preserving maps is introduced. The proposed approach presents a sequential formulation of the self-organizing map (SOM), based on a new model of the neuron, or processing unit. Each neuron acts as a finite impulse response (FIR) system, and the coefficients of the filters are adaptively estimated during the sequential learning …

  • Realization of the Conscience Mechanism in CMOS Implementation of Winner-Takes-All Self-Organizing Neural Networks

    Publication Year: 2010, Page(s):961 - 971
    Cited by:  Papers (16)

    This paper presents a complementary metal-oxide-semiconductor (CMOS) implementation of a conscience mechanism used to improve the effectiveness of learning in winner-takes-all (WTA) artificial neural networks (ANNs) realized at the transistor level. This mechanism makes it possible to eliminate the effect of the so-called "dead neurons," which do not take part in the learning-phase competition …

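    A software analogue of a DeSieno-style conscience mechanism, in which units that win too often are handicapped so that otherwise dead units get to win; the paper's contribution is the transistor-level CMOS realization, which this sketch does not capture (`X` is assumed to be an (n, d) data array):

    ```python
    import numpy as np

    def conscience_wta(X, n_units, lr=0.05, B=1e-4, C=10.0, epochs=10, seed=0):
        """Competitive learning where each unit's running win frequency p
        biases the distance competition: frequent winners are penalized,
        so no unit stays permanently 'dead'."""
        rng = np.random.default_rng(seed)
        W = X[rng.choice(len(X), n_units, replace=False)].astype(float).copy()
        p = np.full(n_units, 1.0 / n_units)       # running win frequencies
        for _ in range(epochs):
            for x in X:
                d = np.sum((W - x) ** 2, axis=1)
                win = np.argmin(d - C * (1.0 / n_units - p))   # biased WTA
                p += B * ((np.arange(n_units) == win) - p)
                W[win] += lr * (x - W[win])       # move only the winner
        return W
    ```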
  • Novel Maximum-Margin Training Algorithms for Supervised Neural Networks

    Publication Year: 2010, Page(s):972 - 984
    Cited by:  Papers (15)

    This paper proposes three novel training methods for multilayer perceptron (MLP) binary classifiers: two based on the backpropagation approach and a third based on information theory. Both backpropagation methods build on the maximal-margin (MM) principle. The first, based on the gradient descent with adaptive learning rate algorithm (GDX) and named maximum-margin GDX (MMGDX), …

  • A BCM Theory of Meta-Plasticity for Online Self-Reorganizing Fuzzy-Associative Learning

    Publication Year: 2010, Page(s):985 - 1003
    Cited by:  Papers (8)

    Self-organizing neurofuzzy approaches have matured in their online learning of fuzzy-associative structures under time-invariant conditions. To maximize their operative value for online reasoning, these self-sustaining mechanisms must also be able to reorganize fuzzy-associative knowledge in real-time dynamic environments. Hence, it is critical to recognize that they would require self-reorganization …

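    For reference, a minimal sketch of the classical BCM rule with its sliding modification threshold, the meta-plasticity ingredient named in the title (the fuzzy-associative architecture the paper builds on top of it is not reproduced here):

    ```python
    import numpy as np

    def bcm_step(w, x, theta, eta=0.01, tau=0.99):
        """One BCM update: plasticity is Hebbian when the response y is
        above the threshold theta and anti-Hebbian below it, while theta
        itself slides toward the running average of y**2."""
        y = float(w @ x)
        w = w + eta * y * (y - theta) * x
        theta = tau * theta + (1.0 - tau) * y ** 2
        return w, theta
    ```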
  • The Infinite Hidden Markov Random Field Model

    Publication Year: 2010, Page(s):1004 - 1014
    Cited by:  Papers (10)

    Hidden Markov random field (HMRF) models are widely used for image segmentation, as they arise naturally in problems that call for a spatially constrained clustering scheme. A major limitation of HMRF models concerns the automatic selection of the proper number of their states, i.e., the number of region clusters derived by the image segmentation procedure. Existing methods, including likelihood …

  • Inference From Aging Information

    Publication Year: 2010, Page(s):1015 - 1020
    Cited by:  Papers (1)

    For many learning tasks, the duration of the data collection can be greater than the time scale on which the underlying data distribution changes. The question we ask is how to include the information that the data are aging. Ad hoc methods to achieve this include validity windows that prevent the learning machine from making inferences based on old data. This introduces the problem of how to …

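    A minimal sketch of a smooth alternative to hard validity windows: exponentially down-weighting samples by age in a least-squares fit (illustrative only; the paper treats aging within an inference framework rather than by a fixed decay):

    ```python
    import numpy as np

    def aged_least_squares(X, y, ages, lam=0.95):
        """Weighted least squares in which a sample of age a receives
        weight lam**a, smoothly discounting old data instead of cutting
        it off with a hard validity window."""
        w = lam ** np.asarray(ages, dtype=float)
        Xw = X * w[:, None]                         # scale rows by weights
        return np.linalg.solve(X.T @ Xw, Xw.T @ y)  # weighted normal equations
    ```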
  • Discriminant Analysis for Fast Multiclass Data Classification Through Regularized Kernel Function Approximation

    Publication Year: 2010, Page(s):1020 - 1029
    Cited by:  Papers (11)

    In this brief, we propose computationally inexpensive multiclass data classification by discriminant analysis through vector-valued regularized kernel function approximation (VVRKFA). VVRKFA, an extension of fast regularized kernel function approximation (FRKFA), provides the vector-valued response in a single step. VVRKFA finds a linear operator and a bias vector by using a reduced …

  • 2011 International Joint Conference on Neural Networks

    Publication Year: 2010, Page(s): 1030
    Freely Available from IEEE
  • 2010 IEEE World Congress on Computational Intelligence

    Publication Year: 2010, Page(s): 1031
    Freely Available from IEEE
  • Scitopia.org [advertisement]

    Publication Year: 2010, Page(s): 1032
    Freely Available from IEEE
  • IEEE Computational Intelligence Society Information

    Publication Year: 2010, Page(s): C3
    Freely Available from IEEE
  • IEEE Transactions on Neural Networks Information for authors

    Publication Year: 2010, Page(s): C4
    Freely Available from IEEE

Aims & Scope

IEEE Transactions on Neural Networks is devoted to the science and technology of neural networks; it publishes significant technical knowledge, exploratory developments, and applications of neural networks from biology to software to hardware.

This Transactions ceased production in 2011. The current retitled publication is IEEE Transactions on Neural Networks and Learning Systems.
