IEEE Transactions on Neural Networks

Issue 6 • Nov. 2001

Displaying Results 1 - 25 of 36
  • Advances in independent component analysis [Book Review]

    Publication Year: 2001, Page(s): 1547
    Cited by:  Papers (2)
    PDF (10 KB) | HTML
    Freely Available from IEEE
  • Author index

    Publication Year: 2001, Page(s):1548 - 1552
    PDF (52 KB)
    Freely Available from IEEE
  • Subject index

    Publication Year: 2001, Page(s):1552 - 1564
    PDF (97 KB)
    Freely Available from IEEE
  • Hopfield neural networks for affine invariant matching

    Publication Year: 2001, Page(s):1400 - 1410
    Cited by:  Papers (49)
    PDF (605 KB) | HTML

    The affine transformation, which consists of rotation, translation, scaling, and shearing transformations, can be considered an approximation to the perspective transformation. It is therefore important in many applications to find an effective means of establishing point correspondences under affine transformation. In this paper, we consider the point correspondence problem as a subgrap...

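As a concrete illustration of the affine model named in the abstract, the sketch below (plain NumPy; the function names and the composition order are illustrative assumptions, not the paper's matching method) composes rotation, translation, scaling, and shearing into one homogeneous matrix and applies it to 2-D points.

```python
import numpy as np

def affine_matrix(theta=0.0, tx=0.0, ty=0.0, sx=1.0, sy=1.0, shear=0.0):
    """Compose scaling, shearing, rotation, and translation (in that
    order -- an illustrative choice) into one 3x3 homogeneous matrix."""
    S = np.array([[sx, 0.0, 0.0], [0.0, sy, 0.0], [0.0, 0.0, 1.0]])
    H = np.array([[1.0, shear, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    T = np.array([[1.0, 0.0, tx], [0.0, 1.0, ty], [0.0, 0.0, 1.0]])
    return T @ R @ H @ S

def apply_affine(A, points):
    """Apply a 3x3 homogeneous affine matrix to an (N, 2) point array."""
    homo = np.hstack([points, np.ones((len(points), 1))])
    return (A @ homo.T).T[:, :2]

A = affine_matrix(tx=2.0, ty=-1.0)
pts = np.array([[0.0, 0.0], [1.0, 1.0]])
print(apply_affine(A, pts))  # maps (0,0) -> (2,-1) and (1,1) -> (3,0)
```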
  • A new pruning heuristic based on variance analysis of sensitivity information

    Publication Year: 2001, Page(s):1386 - 1399
    Cited by:  Papers (94)
    PDF (255 KB) | HTML

    Architecture selection is a very important aspect of neural network (NN) design for optimally tuning performance and computational complexity. Sensitivity analysis has been used successfully to prune irrelevant parameters from feedforward NNs. This paper presents a new pruning algorithm that uses sensitivity analysis to quantify the relevance of input and hidden units. A new statistical ...

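The idea of ranking units by statistics of their sensitivities can be illustrated with a toy stand-in. The sketch below uses finite-difference sensitivities and an ad hoc mean-plus-variance relevance score; the paper's actual variance-nullity test differs, and all names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "network": y = tanh(x W1) . w2, with the third input disconnected.
W1 = rng.normal(size=(3, 4))
W1[2, :] = 0.0
w2 = rng.normal(size=4)
f = lambda x: np.tanh(x @ W1) @ w2

def sensitivities(f, X, eps=1e-5):
    """Finite-difference output sensitivity dy/dx_i for every pattern
    (rows of X) and every input unit i (columns)."""
    S = np.zeros_like(X)
    for i in range(X.shape[1]):
        d = np.zeros(X.shape[1])
        d[i] = eps
        S[:, i] = (f(X + d) - f(X - d)) / (2 * eps)
    return S

X = rng.normal(size=(200, 3))
S = sensitivities(f, X)
# Ad hoc relevance score from the mean and variance of the sensitivity
# over patterns (the paper's statistical test is more principled).
score = S.mean(axis=0) ** 2 + S.var(axis=0)
prune = score < 1e-6 * score.max()
print(prune)  # only the disconnected third input is flagged
```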
  • A network of dynamically coupled chaotic maps for scene segmentation

    Publication Year: 2001, Page(s):1375 - 1385
    Cited by:  Papers (19)
    PDF (264 KB) | HTML

    In this paper, a computational model for scene segmentation based on a network of dynamically coupled chaotic maps is proposed. Time evolutions of chaotic maps that correspond to an object in the given scene are synchronized with one another, while this synchronized evolution is desynchronized with respect to the time evolution of chaotic maps corresponding to other objects in the scene. In this model...

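A minimal sketch of the synchronization mechanism: logistic maps coupled to each other fall onto a common trajectory, while an uncoupled map does not. The coupling scheme and parameters below are illustrative assumptions, not the paper's network.

```python
import numpy as np

def logistic(x, a=3.9):
    """Chaotic logistic map on [0, 1]."""
    return a * x * (1.0 - x)

def step(x, C, eps=0.5):
    """Synchronous update of coupled logistic maps; C is a row-stochastic
    coupling matrix and eps the coupling strength (illustrative values)."""
    fx = logistic(x)
    return (1.0 - eps) * fx + eps * (C @ fx)

rng = np.random.default_rng(1)
x = rng.random(3)
# Maps 0 and 1 belong to the same "object" and are mutually coupled;
# map 2 is isolated, standing in for a different object.
C = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])
for _ in range(200):
    x = step(x, C)
print(abs(x[0] - x[1]))  # 0.0: the coupled pair has synchronized
```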
  • The topographic organization and visualization of binary data using multivariate-Bernoulli latent variable models

    Publication Year: 2001, Page(s):1367 - 1374
    Cited by:  Papers (11)
    PDF (331 KB) | HTML

    A nonlinear latent variable model for the topographic organization and subsequent visualization of multivariate binary data is presented. The generative topographic mapping (GTM) is a nonlinear factor analysis model for continuous data which assumes an isotropic Gaussian noise model and performs uniform sampling from a two-dimensional (2-D) latent space. Despite the success of the GTM when applie...

  • Sensitivity analysis of multilayer perceptron to input and weight perturbations

    Publication Year: 2001, Page(s):1358 - 1366
    Cited by:  Papers (50)
    PDF (221 KB) | HTML

    An important issue in the design and implementation of a neural network is the sensitivity of its output to input and weight perturbations. In this paper, we discuss the sensitivity of the most popular and general feedforward neural network, the multilayer perceptron (MLP). The sensitivity is defined as the mathematical expectation of the output errors of the MLP due to input and weight perturbations ...

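The expectation described in this abstract can be approximated numerically. Below is a Monte Carlo sketch for a fixed, randomly weighted MLP under Gaussian input perturbations; the paper derives the expectation analytically and covers weight perturbations as well, so this stand-in only illustrates the quantity being studied.

```python
import numpy as np

rng = np.random.default_rng(2)

# A fixed, randomly weighted MLP with one tanh hidden layer.
W1, b1 = rng.normal(size=(3, 8)), rng.normal(size=8)
w2, b2 = rng.normal(size=8), rng.normal()
mlp = lambda x: np.tanh(x @ W1 + b1) @ w2 + b2

def output_sensitivity(x, sigma, n=20000):
    """Monte Carlo estimate of E|y(x + dx) - y(x)| for Gaussian input
    perturbations dx of standard deviation sigma (weights held fixed)."""
    dx = rng.normal(scale=sigma, size=(n, len(x)))
    return np.mean(np.abs(mlp(x + dx) - mlp(x)))

x0 = np.zeros(3)
small, large = output_sensitivity(x0, 0.01), output_sensitivity(x0, 0.1)
print(small < large)  # True: expected output error grows with sigma
```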
  • Distributed fault tolerance in optimal interpolative nets

    Publication Year: 2001, Page(s):1348 - 1357
    Cited by:  Papers (10)
    PDF (174 KB) | HTML

    The recursive training algorithm for the optimal interpolative (OI) classification network is extended to include distributed fault tolerance. The conventional OI Net learning algorithm leads to network weights that are nonoptimally distributed (in the sense of fault tolerance). Fault tolerance is becoming an increasingly important factor in hardware implementations of neural networks. But fault t...

  • A multilayer self-organizing model for convex-hull computation

    Publication Year: 2001, Page(s):1341 - 1347
    Cited by:  Papers (4)
    PDF (119 KB) | HTML

    A self-organizing neural-network model is proposed for computation of the convex hull of a given set of planar points. The network evolves in such a manner that it adapts itself to the hull vertices of the convex hull. The proposed network consists of three layers of processors. The bottom layer computes some angles, which are passed to the middle layer. The middle layer is used for computation of ...

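For reference, the output such a network must converge to can be computed conventionally. The sketch below uses Andrew's monotone chain, a standard algorithm, not the paper's three-layer angle-based model.

```python
def convex_hull(points):
    """Andrew's monotone chain: a conventional baseline computing the
    hull vertices the self-organizing model is designed to adapt to."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    pts = sorted(set(map(tuple, points)))
    if len(pts) <= 2:
        return pts
    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h[:-1]
    return half(pts) + half(pts[::-1])

square_plus_interior = [(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]
print(convex_hull(square_plus_interior))
# [(0, 0), (2, 0), (2, 2), (0, 2)]: hull vertices only, interior point dropped
```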
  • LSTM recurrent networks learn simple context-free and context-sensitive languages

    Publication Year: 2001, Page(s):1333 - 1340
    Cited by:  Papers (47)
    PDF (209 KB) | HTML

    Previous work on learning regular languages from exemplary training sequences showed that long short-term memory (LSTM) outperforms traditional recurrent neural networks (RNNs). We demonstrate LSTM's superior performance on context-free language benchmarks for RNNs, and show that it works even better than previous hardwired or highly specialized architectures. To the best of our knowledge, LSTM var...

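The benchmark languages in this line of work are easy to state precisely. The sketch below generates and recognizes the context-sensitive language a^n b^n c^n, i.e., the ground-truth function an LSTM's cell-state counters must approximate (illustrative code, not the paper's experiments).

```python
import re

def anbncn(n):
    """One positive example of the context-sensitive language a^n b^n c^n."""
    return "a" * n + "b" * n + "c" * n

def accepts(s):
    """Counting recognizer for a^n b^n c^n (n >= 1): the target function
    an LSTM must approximate with its internal counters."""
    m = re.fullmatch(r"(a+)(b+)(c+)", s)
    return bool(m) and len(m.group(1)) == len(m.group(2)) == len(m.group(3))

print(accepts(anbncn(5)))   # True
print(accepts("aabbbcc"))   # False
```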
  • Self-organizing maps, vector quantization, and mixture modeling

    Publication Year: 2001, Page(s):1299 - 1305
    Cited by:  Papers (75)  |  Patents (1)
    PDF (234 KB) | HTML

    Self-organizing maps are popular algorithms for unsupervised learning and data visualization. Exploiting the link between vector quantization and mixture modeling, we derive expectation-maximization (EM) algorithms for self-organizing maps with and without missing values. We compare self-organizing maps with the elastic-net approach and explain why the former is better suited for the visualization...

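The vector-quantization view of the SOM can be sketched as a batch algorithm: assign each datum to its best-matching unit, then move every unit to a neighborhood-weighted data mean. The code below is a minimal stand-in; the paper's contribution is recasting this as EM for a constrained mixture model, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

def batch_som(X, n_units=10, iters=50, sigma0=3.0):
    """Batch SOM with a 1-D chain of units: assign each datum to its
    best-matching unit (BMU), then move every unit to the neighborhood-
    weighted mean of the data. The neighborhood width sigma shrinks
    geometrically over the iterations."""
    W = rng.normal(size=(n_units, X.shape[1]))
    grid = np.arange(n_units)
    for t in range(iters):
        sigma = sigma0 * (0.1 / sigma0) ** (t / max(iters - 1, 1))
        bmu = np.argmin(((X[:, None, :] - W[None]) ** 2).sum(-1), axis=1)
        h = np.exp(-0.5 * ((grid[None, :] - grid[bmu][:, None]) / sigma) ** 2)
        W = (h.T @ X) / (h.sum(axis=0)[:, None] + 1e-12)
    return W

t = rng.random(200)
X = np.stack([t, t], axis=1)            # data on the line y = x
W = batch_som(X)
err = np.min(((X[:, None] - W[None]) ** 2).sum(-1), axis=1).mean()
print(err)  # small mean squared quantization error
```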
  • Two regularizers for recursive least squared algorithms in feedforward multilayered neural networks

    Publication Year: 2001, Page(s):1314 - 1332
    Cited by:  Papers (31)
    PDF (440 KB) | HTML

    Recursive least squares (RLS)-based algorithms are a class of fast online training algorithms for feedforward multilayered neural networks (FMNNs). Though the standard RLS algorithm has an implicit weight decay term in its energy function, the weight decay effect decreases linearly as the number of learning epochs increases, so the decay diminishes as training progresses. ...

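Plain RLS for a linear model shows where the implicit weight decay enters: the initial inverse-covariance scale delta acts like a prior whose influence fades as data accumulate. This is a sketch of standard RLS, not of the paper's two regularizers.

```python
import numpy as np

def rls_fit(X, y, lam=1.0, delta=100.0):
    """Standard recursive least squares for a linear model y = x . w.
    lam is the forgetting factor; delta scales the initial inverse
    covariance (the source of the implicit weight-decay term)."""
    n = X.shape[1]
    w = np.zeros(n)
    P = delta * np.eye(n)
    for x, t in zip(X, y):
        k = P @ x / (lam + x @ P @ x)          # gain vector
        w = w + k * (t - x @ w)                # correct weights on the error
        P = (P - np.outer(k, x @ P)) / lam     # update inverse covariance
    return w

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.01 * rng.normal(size=500)
print(rls_fit(X, y))  # close to [1.0, -2.0, 0.5]
```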
  • Processing directed acyclic graphs with recursive neural networks

    Publication Year: 2001, Page(s):1464 - 1470
    Cited by:  Papers (13)  |  Patents (3)
    PDF (157 KB) | HTML

    Recursive neural networks are conceived for processing graphs and extend the well-known recurrent model for processing sequences. As formulated in Frasconi et al. (1998), recursive neural networks can deal only with directed ordered acyclic graphs (DOAGs), in which the children of any given node are ordered. While this assumption is reasonable in some applications, it introduces unnecessary constraints in other...

  • On the convergence of the decomposition method for support vector machines

    Publication Year: 2001, Page(s):1288 - 1298
    Cited by:  Papers (94)
    PDF (253 KB) | HTML

    The decomposition method is currently one of the major methods for solving support vector machines (SVMs), yet its convergence properties have not been fully understood. The general asymptotic convergence was first proposed by Chang et al. However, their working set selection does not coincide with existing implementations. A later breakthrough by Keerthi and Gilbert (2000, 2002) proved the convergence ...

  • RBFNN-based hole identification system in conducting plates

    Publication Year: 2001, Page(s):1445 - 1454
    Cited by:  Papers (3)  |  Patents (1)
    PDF (184 KB) | HTML

    A neural-based signal processing system that exploits a radial basis function neural network (RBFNN) is proposed to solve the problem of detecting and locating circular holes in conducting plates by means of nondestructive eddy-current testing. The capabilities of basic multilayer perceptron and radial basis function (RBF) schemes are first investigated. Since the achieved performance revealed insu...

  • H∞-learning of layered neural networks

    Publication Year: 2001, Page(s):1265 - 1277
    Cited by:  Papers (22)  |  Patents (2)
    PDF (377 KB) | HTML

    Although the backpropagation (BP) scheme is widely used as a learning algorithm for multilayered neural networks, the learning speed of the BP algorithm in reaching acceptable errors is unsatisfactory, in spite of improvements such as the introduction of a momentum factor and an adaptive learning rate in the weight adjustment. To solve this problem, a fast learning algorithm based on the extended Ka...

  • Global convergence of delayed dynamical systems

    Publication Year: 2001, Page(s):1532 - 1536
    Cited by:  Papers (29)
    PDF (165 KB) | HTML

    We discuss some delayed dynamical systems, investigating their stability and convergence in a critical case. To ensure stability, the coefficients of the dynamical system must satisfy some inequalities. In most of the existing literature, the restrictions on the coefficients are strict inequalities. The tough question is what happens in the (critical) case where the strict inequalities are replac...

  • Qualitative analysis of a recurrent neural network for nonlinear continuously differentiable convex minimization over a nonempty closed convex subset

    Publication Year: 2001, Page(s):1521 - 1525
    Cited by:  Papers (3)
    PDF (158 KB) | HTML

    We investigate the qualitative properties of a recurrent neural network (RNN) for minimizing a nonlinear continuously differentiable and convex objective function over any given nonempty, closed, and convex subset which may be bounded or unbounded, by exploiting some key inequalities in mathematical programming. The global existence and boundedness of the solution of the RNN are proved when the ob...

  • Compound binomial processes in neural integration

    Publication Year: 2001, Page(s):1505 - 1512
    Cited by:  Papers (3)
    PDF (213 KB) | HTML

    This paper explores some of the properties of stochastic digital signal processing in which the input signals are represented as sequences of Bernoulli events. The event statistics of the resulting stochastic process may be governed by compound binomial processes, depending upon how the individual input variables to a neural network are stochastically multiplexed. Similar doubly stochastic statistics can als...

  • An algorithmic approach to adaptive state filtering using recurrent neural networks

    Publication Year: 2001, Page(s):1411 - 1432
    Cited by:  Papers (48)  |  Patents (1)
    PDF (500 KB) | HTML

    Practical algorithms are presented for adaptive state filtering in nonlinear dynamic systems when the state equations are unknown. The state equations are constructively approximated using neural networks. The algorithms presented are based on the two-step prediction-update approach of the Kalman filter. The proposed algorithms make minimal assumptions regarding the underlying nonlinear dynamics a...

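The two-step prediction-update structure referred to in this abstract is that of the standard Kalman filter. Below is a sketch with known linear dynamics; the paper's point is precisely to replace the unknown dynamics with trained neural networks, which this stand-in does not attempt.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict -> update cycle of the Kalman filter."""
    # Predict: propagate state estimate and covariance through the model.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: correct the prediction with the new measurement z.
    S = H @ P_pred @ H.T + R                      # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)           # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Track a constant-velocity scalar: state = [position, velocity].
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])                        # observe position only
Q, R = 1e-4 * np.eye(2), np.array([[0.25]])
x, P = np.zeros(2), np.eye(2)
for k in range(1, 50):
    z = np.array([2.0 * k])                       # noiseless position data
    x, P = kalman_step(x, P, z, F, H, Q, R)
print(x)  # estimate approaches position 98, velocity 2
```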
  • Generalization properties of modular networks: implementing the parity function

    Publication Year: 2001, Page(s):1306 - 1313
    Cited by:  Papers (13)
    PDF (156 KB) | HTML

    The parity function is one of the most frequently used Boolean functions for testing learning algorithms, because of both its simple definition and its great complexity. We construct a family of modular architectures that implement the parity function, in which every member of the family can be characterized by the fan-in max of the network, i.e., the maximum number of connections that a neuron can receive. We...

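The modular construction has a simple Boolean skeleton: parity computed by a cascade of fan-in-2 modules. In the paper each module is a small subnetwork of threshold units; the sketch below reduces each module to its Boolean function.

```python
from functools import reduce

def xor_gate(a, b):
    """A fan-in-2 parity module (in the paper, a small subnetwork of
    threshold units; here reduced to its Boolean function)."""
    return a ^ b

def parity(bits):
    """n-bit parity computed by cascading fan-in-2 modules."""
    return reduce(xor_gate, bits, 0)

print(parity([1, 0, 1, 1]))  # 1: odd number of ones
print(parity([1, 1, 0, 0]))  # 0: even number of ones
```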
  • Blind source separation by nonstationarity of variance: a cumulant-based approach

    Publication Year: 2001, Page(s):1471 - 1474
    Cited by:  Papers (49)  |  Patents (1)
    PDF (64 KB) | HTML

    Blind separation of source signals usually relies either on the non-Gaussianity of the signals or on their linear autocorrelations. A third approach was introduced by Matsuoka et al. (1995), who showed that source separation can be performed by using the nonstationarity of the signals, in particular the nonstationarity of their variances. In this paper, we show how to interpret the nonstationarity ...

  • A neuromorphic VLSI device for implementing 2D selective attention systems

    Publication Year: 2001, Page(s):1455 - 1463
    Cited by:  Papers (33)  |  Patents (1)
    PDF (247 KB) | HTML

    Selective attention is a mechanism used to sequentially select and process salient subregions of the input space, while suppressing inputs arriving from nonsalient regions. By processing small amounts of sensory information in a serial fashion, rather than attempting to process all the sensory data in parallel, this mechanism overcomes the problem of flooding limited processing capacity systems wi...

  • Confidence estimation methods for neural networks: a practical comparison

    Publication Year: 2001, Page(s):1278 - 1287
    Cited by:  Papers (58)
    PDF (172 KB) | HTML

    Feedforward neural networks, particularly multilayer perceptrons, are widely used in regression and classification tasks. A reliable and practical measure of prediction confidence is essential. In this work three alternative approaches to prediction confidence estimation are presented and compared: maximum likelihood, approximate Bayesian, and the bootstrap technique. We ...

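Of the three methods compared, the bootstrap is the simplest to sketch: refit the model on resampled training sets and read a confidence interval off the percentiles of the resulting predictions. The code below uses a linear least-squares model as a stand-in for a neural network; the function names and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def bootstrap_ci(X, y, predict_fit, x_query, n_boot=500, alpha=0.05):
    """Bootstrap confidence interval for a model's prediction at x_query:
    refit on resampled data, collect predictions, take percentiles."""
    preds = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(X), size=len(X))
        preds.append(predict_fit(X[idx], y[idx], x_query))
    lo, hi = np.percentile(preds, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi

def lsq_predict(X, y, x_query):
    """Fit by linear least squares and predict at x_query (a stand-in
    for retraining a network on the resampled data)."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return x_query @ w

X = rng.normal(size=(200, 2))
y = X @ np.array([1.0, 2.0]) + 0.1 * rng.normal(size=200)
lo, hi = bootstrap_ci(X, y, lsq_predict, np.array([1.0, 1.0]))
print(lo, hi)  # a narrow interval near the true value 3.0
```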

Aims & Scope

IEEE Transactions on Neural Networks is devoted to the science and technology of neural networks, publishing significant technical knowledge, exploratory developments, and applications of neural networks from biology to software to hardware.


This Transactions ceased production in 2011. The current retitled publication is IEEE Transactions on Neural Networks and Learning Systems.
