IEEE Transactions on Neural Networks

Issue 1 • Jan. 1998

  • Elements Of Artificial Neural Networks [Book Reviews]

    Publication Year: 1998, Page(s):234 - 235
    Cited by:  Papers (1)
    Freely Available from IEEE
  • Neural Network Analysis, Architectures And Applications [Books in Brief]

    Publication Year: 1998, Page(s): 236
    Freely Available from IEEE
  • Incremental communication for multilayer neural networks: error analysis

    Publication Year: 1998, Page(s):68 - 82
    Cited by:  Papers (4)

    Artificial neural networks (ANNs) involve a large amount of internode communications. To reduce the communication cost as well as the time of the learning process in ANNs, we earlier proposed (1995) an incremental internode communication method. In the incremental communication method, instead of communicating the full magnitude of the output value of a node, only the increment or decrement to its pre...

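    The abstract above is cut off before the details; as a rough illustration of the general idea (not the authors' 1995 scheme), a sender can transmit only a quantized change in each output rather than the full value, and the receiver accumulates those changes. The class, function, and fixed-point step below are assumptions made for this sketch.

        # Hypothetical sketch of incremental internode communication: transmit
        # only the quantized change in each node output; the receiver accumulates.
        import numpy as np

        def quantize(delta, step=1.0 / 256):           # assumed fixed-point step size
            return np.round(delta / step) * step

        class IncrementalLink:
            def __init__(self, n):
                self.sent = np.zeros(n)                # last values the sender has reported
                self.recv = np.zeros(n)                # receiver's running reconstruction

            def communicate(self, outputs):
                delta = quantize(outputs - self.sent)  # only increments/decrements cross the link
                self.sent += delta
                self.recv += delta                     # accumulate on the receiving side
                return self.recv                       # approximation of the true outputs
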
  • Global convergence of Oja's subspace algorithm for principal component extraction

    Publication Year: 1998, Page(s):58 - 67
    Cited by:  Papers (40)

    Oja's principal subspace algorithm is a well-known and powerful technique for learning and tracking principal information in time series. A thorough investigation of the convergence property of Oja's algorithm is undertaken in this paper. The asymptotic convergence rates of the algorithm are derived. The dependence of the algorithm on its initial weight matrix and the singularity of the data cov...

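    As background for the entry above, Oja's subspace rule itself is standard: with weight matrix W and input x, project y = W'x and update W by eta*(x y' - W y y'). The sketch below states that textbook update; function and parameter names are illustrative, not taken from the paper.

        # Standard Oja subspace update for tracking an m-dimensional principal
        # subspace from streaming data (textbook form, not the paper's analysis).
        import numpy as np

        def oja_subspace(data, m, eta=0.01, seed=0):
            rng = np.random.default_rng(seed)
            n = data.shape[1]                          # data has shape (T, n)
            W = 0.1 * rng.standard_normal((n, m))      # initial weight matrix
            for x in data:
                y = W.T @ x                            # projection onto the current subspace
                W += eta * (np.outer(x, y) - W @ np.outer(y, y))  # Oja subspace rule
            return W                                   # columns approximately span the principal subspace
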
  • A discrete dynamics model for synchronization of pulse-coupled oscillators

    Publication Year: 1998, Page(s):51 - 57
    Cited by:  Papers (1)

    Biological information processing systems employ a variety of feature types. It has been postulated that oscillator synchronization is the mechanism for binding these features together to realize coherent perception. A discrete dynamic model of a coupled system of oscillators is presented. The network of oscillators converges to a state where subpopulations of cells become phase synchronized. It h...

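    For readers unfamiliar with pulse-coupled oscillators, the sketch below simulates a generic discrete-time model in which each firing unit gives every other unit a small phase kick; it is purely illustrative and is not the model analyzed in the paper.

        # Generic discrete-time pulse-coupled oscillators (illustrative only).
        import numpy as np

        def simulate(n=20, steps=500, eps=0.05, seed=0):
            rng = np.random.default_rng(seed)
            phase = rng.random(n)                      # phases in [0, 1)
            for _ in range(steps):
                phase += 0.02                          # free-running phase advance
                fired = phase >= 1.0
                while fired.any():                     # resolve a cascade of firings
                    phase[fired] = 0.0                 # firing units reset
                    phase[~fired] += eps * fired.sum()  # other units receive a phase kick
                    fired = phase >= 1.0
            return phase                               # clusters of units end up firing together
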
  • Adaptive unsupervised extraction of one component of a linear mixture with a single neuron

    Publication Year: 1998, Page(s):123 - 138
    Cited by:  Papers (21)

    Extracting one specific component of a linear mixture means isolating it from observations of several mixtures of all the components. This is done in an unsupervised way, based on the sole knowledge that the components are independent. The classical solution is independent component analysis, which extracts the components all at the same time. In this paper, given at least as many sensors as com...

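    The single-unit idea can be made concrete with a generic one-unit blind extraction rule (a kurtosis-based fixed-point iteration in the spirit of one-unit ICA); the paper's own adaptive, neuron-level rule differs and is not reproduced here. Whitened observations are assumed.

        # Generic one-unit blind source extraction (kurtosis contrast, fixed point).
        import numpy as np

        def extract_one(X, iters=100, seed=0):
            rng = np.random.default_rng(seed)          # X: (n_sensors, n_samples), whitened
            w = rng.standard_normal(X.shape[0])
            w /= np.linalg.norm(w)
            for _ in range(iters):
                y = w @ X                              # current estimate of one component
                w = (X * y**3).mean(axis=1) - 3 * w    # fixed-point update of the extraction vector
                w /= np.linalg.norm(w)
            return w @ X                               # extracted component (up to scale and sign)
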
  • Long-term attraction in higher order neural networks

    Publication Year: 1998, Page(s):42 - 50
    Cited by:  Papers (11)

    Recent results on the memory storage capacity of higher order neural networks indicate a significant improvement compared to the limited capacity of the Hopfield model. However, such results have so far been obtained under the restriction that only a single iteration is allowed to converge. This paper presents an indirect convergence (long-term attraction) analysis of higher order neural networks. ...

  • Multiple descent cost competition: restorable self-organization and multimedia information processing

    Publication Year: 1998, Page(s):106 - 122
    Cited by:  Papers (2)

    Multiple descent cost competition is a composition of learning phases for minimizing a given measure of total performance, i.e., cost. In the first phase of descent cost learning, elements of source data are grouped. Simultaneously, a weight vector for minimal learning (a winner) is found. Then, the winner and its partners are updated for further cost reduction. Therefore, two classes of self-or...

  • Learning in certainty-factor-based multilayer neural networks for classification

    Publication Year: 1998, Page(s):151 - 158
    Cited by:  Papers (13)

    The computational framework of rule-based neural networks inherits from the neural network and the inference engine of an expert system. In one approach, the network activation function is based on the certainty factor (CF) model of MYCIN-like systems. In this paper, it is shown theoretically that the neural network using the CF-based activation function requires relatively small sample sizes for ...

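    The certainty-factor model referenced here comes from MYCIN-style expert systems; the standard CF combination rule (general knowledge, not the paper's exact network formulation) can serve as an activation that folds incoming evidence together, as in the sketch below.

        # MYCIN-style certainty-factor combination used as an activation sketch.
        def cf_combine(a, b):                          # a, b in [-1, 1]; assumes |a|, |b| < 1 in the mixed case
            if a >= 0 and b >= 0:
                return a + b - a * b
            if a <= 0 and b <= 0:
                return a + b + a * b
            return (a + b) / (1 - min(abs(a), abs(b)))

        def cf_activation(inputs):
            out = 0.0
            for x in inputs:                           # fold incoming evidence one item at a time
                out = cf_combine(out, x)
            return out
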
  • Limitations of nonlinear PCA as performed with generic neural networks

    Publication Year: 1998, Page(s):165 - 173
    Cited by:  Papers (58)

    Kramer's (1991) nonlinear principal components analysis (NLPCA) neural networks are feedforward autoassociative networks with five layers. The third layer has fewer nodes than the input or output layers. This paper proposes a geometric interpretation for Kramer's method by showing that NLPCA fits a lower-dimensional curve or surface through the training data. The first three layers project observa...

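    Kramer's five-layer autoassociative architecture is easy to state: a nonlinear mapping layer, a narrow bottleneck, a nonlinear demapping layer, and an output trained to reproduce the input. A minimal sketch follows; the layer sizes and the use of PyTorch are illustrative choices, not taken from the paper.

        # Five-layer autoassociative (NLPCA) network with a low-dimensional bottleneck.
        import torch.nn as nn

        def nlpca_net(n_inputs, n_mapping=10, n_bottleneck=2):
            return nn.Sequential(
                nn.Linear(n_inputs, n_mapping), nn.Tanh(),      # mapping layer
                nn.Linear(n_mapping, n_bottleneck),             # bottleneck ("nonlinear scores")
                nn.Linear(n_bottleneck, n_mapping), nn.Tanh(),  # demapping layer
                nn.Linear(n_mapping, n_inputs),                 # reconstruction of the input
            )
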
  • Cross-validation with active pattern selection for neural-network classifiers

    Publication Year: 1998, Page(s):35 - 41
    Cited by:  Papers (15)

    We propose a new approach for leave-one-out cross-validation of neural-network classifiers called “cross-validation with active pattern selection” (CV/APS). In CV/APS, the contribution of the training patterns to network learning is estimated and this information is used for active selection of CV patterns. On the tested examples, the computational cost of CV can be drastically reduced...

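    For context, plain leave-one-out cross-validation retrains once per held-out pattern, which is what makes it expensive; CV/APS cuts that cost by estimating which patterns matter. The baseline procedure (not the selection heuristic) is sketched below, with fit and predict as user-supplied callbacks.

        # Plain leave-one-out cross-validation over numpy arrays X, y.
        import numpy as np

        def loo_error(X, y, fit, predict):
            errors = 0
            for i in range(len(X)):
                mask = np.arange(len(X)) != i          # hold out pattern i
                model = fit(X[mask], y[mask])          # retrain without it
                errors += predict(model, X[i]) != y[i]
            return errors / len(X)                     # leave-one-out error estimate
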
  • A bootstrap evaluation of the effect of data splitting on financial time series

    Publication Year: 1998, Page(s):213 - 220
    Cited by:  Papers (31)

    This paper exposes problems with the commonly used technique of splitting the available data into training, validation, and test sets that are held fixed, warns against drawing overly strong conclusions from such static splits, and shows potential pitfalls of ignoring variability across splits. Using a bootstrap or resampling method, we compare the uncertainty in the solution stemming from the data splitting with n...

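    The point about static splits can be illustrated with a simple resampling loop: repeat the split many times and look at the spread of the score rather than a single number. This is a generic sketch; the paper's bootstrap design for financial series is more elaborate.

        # Variability of a performance estimate across random train/test splits.
        import numpy as np

        def split_variability(X, y, fit, score, n_splits=100, test_frac=0.2, seed=0):
            rng = np.random.default_rng(seed)
            results = []
            for _ in range(n_splits):
                idx = rng.permutation(len(X))          # a fresh random split each repetition
                n_test = int(test_frac * len(X))
                test, train = idx[:n_test], idx[n_test:]
                model = fit(X[train], y[train])
                results.append(score(model, X[test], y[test]))
            return np.mean(results), np.std(results)   # report the spread, not a single split
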
  • Fast training of recurrent networks based on the EM algorithm

    Publication Year: 1998, Page(s):11 - 26
    Cited by:  Papers (20)

    In this work, a probabilistic model is established for recurrent networks. The expectation-maximization (EM) algorithm is then applied to derive a new fast training algorithm for recurrent networks through mean-field approximation. This new algorithm converts training a complicated recurrent network into training an array of individual feedforward neurons. These neurons are then trained via a line...

  • Guaranteed two-pass convergence for supervised and inferential learning

    Publication Year: 1998, Page(s):195 - 204
    Cited by:  Papers (6)

    We present a theoretical analysis of a version of the LAPART adaptive inferencing neural network. Our main result is a proof that the new architecture, called LAPART 2, converges in two passes through a fixed training set of inputs. We also prove that it does not suffer from template proliferation. For comparison, Georgiopoulos et al. (1994) have proved the upper bound n-1 on the number of passes ...

  • Compensatory neurofuzzy systems with fast learning algorithms

    Publication Year: 1998, Page(s):83 - 105
    Cited by:  Papers (84)

    In this paper, a new adaptive fuzzy reasoning method using compensatory fuzzy operators is proposed to make a fuzzy logic system more adaptive and more effective. Such a compensatory fuzzy logic system is proved to be a universal approximator. The compensatory neural fuzzy networks built by both control-oriented fuzzy neurons and decision-oriented fuzzy neurons can not only adaptively adjust fuzzy ...

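    "Compensatory" here refers to fuzzy operators that trade off a pessimistic (AND-like) and an optimistic (OR-like) aggregation. One classical example is the Zimmermann-style gamma operator below; it is shown only to make the term concrete, since the paper defines its own operators.

        # Classical compensatory fuzzy operator: blend of a t-norm and a t-conorm.
        def compensatory_and_or(a, b, gamma=0.5):      # a, b, gamma all in [0, 1]
            t_norm = a * b                             # pessimistic (AND-like) part
            t_conorm = 1 - (1 - a) * (1 - b)           # optimistic (OR-like) part
            return (t_norm ** (1 - gamma)) * (t_conorm ** gamma)
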
  • MART: a multichannel ART-based neural network

    Publication Year: 1998, Page(s):139 - 150
    Cited by:  Papers (17)

    This paper describes MART, an ART-based neural network for adaptive classification of multichannel signal patterns without prior supervised learning. Like other ART-based classifiers, MART is especially suitable for situations in which not even the number of pattern categories to be distinguished is known a priori; its novelty lies in its truly multichannel orientation, especially its ability to q...

  • A smart pixel-based feedforward neural network

    Publication Year: 1998, Page(s):159 - 164
    Cited by:  Papers (2)

    A novel smart pixel-based neural network was realized experimentally. The matrix multiplication is split into positive and negative components and computed optically. The necessary subtraction, binarization, and transmission of the resulting matrices are accomplished via a prototype smart pixel spatial light modulator. The result is a neural network that performs truly parallel computation without ...

  • Quantizing for minimum average misclassification risk

    Publication Year: 1998, Page(s):174 - 182
    Cited by:  Papers (10)

    In pattern classification, a decision rule is a labeled partition of the observation space, where labels represent classes. A way to establish a decision rule is to attach a label to each code vector of a vector quantizer (VQ). When a labeled VQ is adopted as a classifier, we have to design it in such a way that high classification performance is obtained by a given number of code vectors. In this...

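    Classification with a labeled vector quantizer is straightforward once the codebook exists: a test point takes the label of its nearest code vector, as in the sketch below. The design question the paper addresses, namely how to place and label code vectors for minimum average misclassification risk, is not reproduced here.

        # Classify with a labeled vector quantizer: label of the nearest code vector.
        import numpy as np

        def vq_classify(x, codebook, labels):          # codebook: (K, d), labels: (K,)
            dists = np.linalg.norm(codebook - x, axis=1)   # distance to every code vector
            return labels[np.argmin(dists)]                # label of the nearest one
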
  • Stability analysis for neural dynamics with time-varying delays

    Publication Year: 1998, Page(s):221 - 223
    Cited by:  Papers (78)

    By using the usual additive neural-network model, a delay-independent stability criterion for neural dynamics with perturbations of time-varying delays is derived. We extend previously known results obtained by Gopalsamy and He (1994) to the time-varying delay case, and present decay estimates of solutions of neural networks. The asymptotic stability is global in the state space of neuronal activa...

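    For orientation, the usual additive neural-network model with time-varying delays that such delay-independent criteria concern has the standard form below (stated from general knowledge; the paper's notation and assumptions may differ):

        \frac{dx_i(t)}{dt} = -b_i\, x_i(t)
            + \sum_{j=1}^{n} a_{ij}\, f_j\bigl(x_j(t - \tau_{ij}(t))\bigr) + I_i,
        \qquad i = 1, \dots, n,

    with decay rates b_i > 0, connection weights a_ij, bounded activations f_j, bounded time-varying delays tau_ij(t) >= 0, and external inputs I_i.
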
  • A direct adaptive neural-network control for unknown nonlinear systems and its application

    Publication Year: 1998, Page(s):27 - 34
    Cited by:  Papers (131)  |  Patents (1)

    In this paper a direct adaptive neural-network control strategy for unknown nonlinear systems is presented. The system considered is described by an unknown NARMA model, and a feedforward neural network is used to learn the system. Taking the neural network as a neural model of the system, control signals are directly obtained by minimizing either the instant difference or the cumulative differenc...

  • Glove-TalkII-a neural-network interface which maps gestures to parallel formant speech synthesizer controls

    Publication Year: 1998, Page(s):205 - 212
    Cited by:  Papers (38)

    Glove-TalkII is a system which translates hand gestures to speech through an adaptive interface. Hand gestures are mapped continuously to ten control parameters of a parallel formant speech synthesizer. The mapping allows the hand to act as an artificial vocal tract that produces speech in real time. This gives an unlimited vocabulary in addition to direct control of fundamental frequency and volu...

  • A CMOS binary pattern classifier based on Parzen's method

    Publication Year: 1998, Page(s):2 - 10
    Cited by:  Papers (2)

    Biological circuitry in the brain that has been associated with the Parzen method of classification inspired an analog CMOS binary pattern classifier. The circuitry resides on three separate chips. The first chip computes the closeness of a test vector to each training vector stored on the chip where “vector closeness” is defined as the number of bits two vectors have in common above s...

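    The "vector closeness" measure in the abstract has a direct software analogue: count the bits a test vector shares with each stored training vector and keep only counts above a threshold. The sketch below illustrates that notion, not the chip's analog circuitry.

        # Bitwise closeness of a binary test vector to stored training vectors.
        import numpy as np

        def closeness_scores(test, train, threshold):  # train: (M, d) array of 0/1, test: (d,)
            matches = (train == test).sum(axis=1)      # bits in common with each stored vector
            return np.where(matches > threshold, matches, 0)
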
  • Analysis and design of primal-dual assignment networks

    Publication Year: 1998, Page(s):183 - 194
    Cited by:  Papers (15)  |  Patents (1)

    The assignment problem is an archetypical combinatorial optimization problem having widespread applications. This paper presents two recurrent neural networks, a continuous-time one and a discrete-time one, for solving the assignment problem. Because the proposed recurrent neural networks solve the primal and dual assignment problems simultaneously, they are called primal-dual assignment networks....

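    The underlying optimization problem is the standard assignment linear program and its dual, which the abstract says the two networks solve simultaneously. The standard formulation (from general knowledge, not copied from the paper) is:

        \min_{x} \sum_{i,j} c_{ij} x_{ij}
        \quad \text{s.t.}\quad \sum_{j} x_{ij} = 1 \;\; \forall i,\quad
        \sum_{i} x_{ij} = 1 \;\; \forall j,\quad x_{ij} \ge 0,

        \max_{u, v} \sum_{i} u_i + \sum_{j} v_j
        \quad \text{s.t.}\quad u_i + v_j \le c_{ij} \;\; \forall i, j.
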
  • Upper bounds on the number of hidden neurons in feedforward networks with arbitrary bounded nonlinear activation functions

    Publication Year: 1998, Page(s):224 - 229
    Cited by:  Papers (177)

    It is well known that standard single-hidden layer feedforward networks (SLFNs) with at most N hidden neurons (including biases) can learn N distinct samples (x_i, t_i) with zero error, and the weights connecting the input neurons and the hidden neurons can be chosen “almost” arbitrarily. However, these results have been obtained for the case when the activation func...

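    The classical fact quoted in the abstract can be checked directly: choose the hidden weights (almost) arbitrarily, then solve an N-by-N linear system for the output weights so that the N samples are reproduced exactly. A minimal sketch, assuming the hidden-layer matrix is nonsingular (which holds almost surely for distinct samples and random weights):

        # Fit N samples exactly with an N-hidden-unit single-hidden-layer network.
        import numpy as np

        def fit_slfn(X, t, seed=0):                    # X: (N, d) distinct samples, t: (N,) targets
            rng = np.random.default_rng(seed)
            N, d = X.shape
            W = rng.standard_normal((d, N))            # random input-to-hidden weights
            b = rng.standard_normal(N)                 # random hidden biases
            H = np.tanh(X @ W + b)                     # N x N hidden-layer output matrix
            beta = np.linalg.solve(H, t)               # output weights giving zero training error
            return W, b, beta
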
  • Doubly stochastic Poisson processes in artificial neural learning

    Publication Year: 1998, Page(s):229 - 231
    Cited by:  Papers (9)

    This paper investigates neuron activation statistics in artificial neural networks employing stochastic arithmetic. It is shown that a doubly stochastic Poisson process is an appropriate model for the signals in these circuits.

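    A doubly stochastic Poisson (Cox) process is a Poisson process whose rate is itself random; the short simulation below makes the term concrete (a textbook construction, not the paper's circuit model; the gamma rate prior is an arbitrary choice for the sketch).

        # Simulate counts from a doubly stochastic (Cox) Poisson process.
        import numpy as np

        def cox_counts(n_intervals=1000, mean_rate=5.0, seed=0):
            rng = np.random.default_rng(seed)
            rates = rng.gamma(shape=2.0, scale=mean_rate / 2.0, size=n_intervals)  # random intensity per interval
            return rng.poisson(rates)                  # Poisson counts given the random rate
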

Aims & Scope

IEEE Transactions on Neural Networks is devoted to the science and technology of neural networks, which disclose significant technical knowledge, exploratory developments, and applications of neural networks from biology to software to hardware.


This Transactions ceased production in 2011. The current retitled publication is IEEE Transactions on Neural Networks and Learning Systems.
