IEEE Transactions on Neural Networks

Issue 5 • September 1991

  • Worst-case convergence times for Hopfield memories

    Page(s): 533 - 535

    The worst-case upper bound on the convergence time of Hopfield associative memories is improved to half of its previously known value. The consequences of allowing 'don't know' bits in both the input and the output are also considered.

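    As a small illustration of the setting, not of the paper's improved bound, the following sketch recalls a corrupted pattern with a Hebbian Hopfield memory and counts asynchronous update sweeps until the state stops changing; the network size and corruption level are arbitrary choices.

    ```python
    import numpy as np

    def hebbian_weights(patterns):
        """Hebbian outer-product weights for bipolar (+/-1) patterns, zero diagonal."""
        n = patterns.shape[1]
        W = patterns.T @ patterns / n
        np.fill_diagonal(W, 0.0)
        return W

    def converge(W, state, max_sweeps=100):
        """Asynchronous updates; returns (final state, sweeps until no bit changes)."""
        state = state.copy()
        for sweep in range(1, max_sweeps + 1):
            changed = False
            for i in range(len(state)):
                new_bit = 1 if W[i] @ state >= 0 else -1
                if new_bit != state[i]:
                    state[i] = new_bit
                    changed = True
            if not changed:
                return state, sweep
        return state, max_sweeps

    rng = np.random.default_rng(0)
    patterns = rng.choice([-1, 1], size=(3, 64))    # 3 stored patterns, 64 neurons
    W = hebbian_weights(patterns)
    probe = patterns[0].copy()
    probe[:8] *= -1                                 # corrupt 8 bits
    recalled, sweeps = converge(W, probe)
    print(sweeps, np.array_equal(recalled, patterns[0]))
    ```
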
  • Stochastic competitive learning

    Page(s): 522 - 529

    Competitive learning systems are examined as stochastic dynamical systems, covering continuous and discrete formulations of unsupervised, supervised, and differential competitive learning. These systems estimate an unknown probability density function from random pattern samples and behave as adaptive vector quantizers. Synaptic vectors in feedforward competitive neural networks quantize the pattern space and converge to pattern-class centroids or local probability maxima. A stochastic Lyapunov argument shows that competitive synaptic vectors converge to centroids exponentially quickly and reduces competitive learning to stochastic gradient descent. Convergence does not depend on a specific dynamical model of how neuronal activations change. These results extend to competitive estimation of local covariances and higher-order statistics.

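    A minimal sketch of the unsupervised case: winner-take-all updates drive each synaptic vector toward the centroid of the cluster it wins. The cluster layout, learning rate, and unit count below are illustrative choices, not from the paper.

    ```python
    import numpy as np

    def competitive_learning(samples, n_units=3, lr=0.05, epochs=20, seed=0):
        """Winner-take-all learning: the winning synaptic vector moves toward
        each sample and empirically settles near a pattern-class centroid."""
        rng = np.random.default_rng(seed)
        # initialize synaptic vectors on randomly chosen samples
        w = samples[rng.choice(len(samples), n_units, replace=False)].copy()
        for _ in range(epochs):
            for x in rng.permutation(samples):
                j = np.argmin(np.linalg.norm(w - x, axis=1))  # competition
                w[j] += lr * (x - w[j])                       # winner update
        return w

    # three Gaussian clusters; the quantizers should land near their centroids
    rng = np.random.default_rng(1)
    centroids = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0]])
    samples = np.vstack([c + 0.3 * rng.standard_normal((200, 2)) for c in centroids])
    print(np.round(competitive_learning(samples), 2))
    ```
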
  • A new back-propagation algorithm with coupled neuron

    Page(s): 535 - 538

    A novel neuron model and its learning algorithm are presented. They offer a way to speed up convergence when training layered neural networks and to train networks of neurons with a nondifferentiable output function by the gradient descent method. The neuron is called a saturating linear coupled neuron (sl-CONE). Simulation results show that the sl-CONE converges faster in learning than the conventional backpropagation algorithm.

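    The sl-CONE construction itself is specific to the paper; as a loose stand-in for the general idea of gradient-descent training through a nondifferentiable saturating-linear output, this sketch uses the identity derivative inside the linear region (a surrogate gradient). The task and all parameters are hypothetical.

    ```python
    import numpy as np

    def satlin(z):
        """Saturating linear output: hard-clipped, nondifferentiable at the knees."""
        return np.clip(z, 0.0, 1.0)

    def train(X, y, lr=0.5, epochs=200, seed=0):
        """Gradient descent through the clipped output using the identity
        derivative inside the linear region (a surrogate-gradient stand-in,
        not the paper's coupled-neuron construction)."""
        rng = np.random.default_rng(seed)
        w = 0.1 * rng.standard_normal(X.shape[1])
        b = 0.5                                   # start inside the linear region
        for _ in range(epochs):
            z = X @ w + b
            grad_z = (satlin(z) - y) * ((z > 0) & (z < 1))
            w -= lr * X.T @ grad_z / len(X)
            b -= lr * grad_z.mean()
        return w, b

    # learn OR, whose targets sit exactly at the saturation knees
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    y = np.array([0., 1., 1., 1.])
    w, b = train(X, y)
    print(np.round(satlin(X @ w + b)))            # ~[0. 1. 1. 1.]
    ```
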
  • Invariance and neural nets

    Page(s): 498 - 508

    The application of neural nets to invariant pattern recognition is considered. The authors study various techniques for obtaining this invariance with neural net classifiers and identify the invariant-feature technique as the most suitable for current neural classifiers. A novel formulation of invariance in terms of constraints on the feature values leads to a general method for transforming any given feature space into one that is invariant under specified transformations. A case study using range imagery exemplifies these ideas, with good performance obtained.

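    A toy version of the invariant-feature idea, assuming rigid (translation plus rotation) transformations rather than the paper's range-imagery case study: map the raw features to quantities the specified transformations cannot change.

    ```python
    import numpy as np

    def invariant_features(points):
        """Map a 2-D point set to features invariant under rigid motion:
        centering removes translation, and sorted radial distances
        are unchanged by rotation."""
        centered = points - points.mean(axis=0)            # translation invariance
        return np.sort(np.linalg.norm(centered, axis=1))   # rotation invariance

    def rotate(points, theta):
        c, s = np.cos(theta), np.sin(theta)
        return points @ np.array([[c, -s], [s, c]])

    shape = np.array([[0., 0.], [2., 0.], [2., 1.], [0., 1.]])
    moved = rotate(shape, 0.7) + np.array([5., -3.])       # rigid transformation
    print(np.allclose(invariant_features(shape), invariant_features(moved)))  # True
    ```
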
  • Equilibrium characterization of dynamical neural networks and a systematic synthesis procedure for associative memories

    Page(s): 509 - 521

    Several novel results are presented concerning the characterization of the equilibrium conditions of a continuous-time dynamical neural network model, together with a systematic procedure for synthesizing associative memory networks with nonsymmetric interconnection matrices. The equilibrium characterization focuses on the exponential stability and instability properties of the network equilibria and on equilibrium confinement, i.e., ensuring the uniqueness of an equilibrium in a specific region of the state space. While the equilibrium confinement result involves a simple test, the stability results yield explicit estimates of the degree of exponential stability and of the regions of attraction of the stable equilibrium points. Using these results as guidelines, a systematic synthesis procedure is developed for constructing a dynamical neural network that stores a given set of vectors as stable equilibrium points.

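    The paper's model and synthesis procedure are not reproduced here; the sketch below integrates a generic continuous-time network dx/dt = -x + W tanh(x) + b with a hypothetical nonsymmetric W, showing different initial states settling into different stable equilibria.

    ```python
    import numpy as np

    def settle(W, b, x0, dt=0.01, steps=5000, tol=1e-9):
        """Euler-integrate dx/dt = -x + W @ tanh(x) + b until the flow stalls,
        returning the equilibrium the trajectory is attracted to."""
        x = x0.copy()
        for _ in range(steps):
            dx = -x + W @ np.tanh(x) + b
            x += dt * dx
            if np.linalg.norm(dx) < tol:
                break
        return x

    # hypothetical nonsymmetric interconnections with strong self-feedback:
    # the origin is unstable, and two mirror-image stable equilibria attract
    W = np.array([[2.0, 0.1], [-0.1, 2.0]])
    b = np.zeros(2)
    for x0 in (np.array([0.1, 0.1]), np.array([-0.1, -0.1])):
        print(np.round(settle(W, b, x0), 3))
    ```
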
  • CMAC-based adaptive critic self-learning control

    Page(s): 530 - 533

    A technique is presented that integrates the cerebellar model articulation controller (CMAC) into the self-learning control scheme developed by A.G. Barto et al. (IEEE Trans. Syst., Man, Cybern., vol. SMC-13, pp. 834-846, Sept./Oct. 1983). Instead of reserving one input line (as a memory cell) for each quantized state, the integrated technique stores learned information distributively; this reduces the required memory and makes the self-learning control scheme applicable to larger problems. CMAC's capacity for information interpolation also improves the learning speed.

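    A minimal CMAC sketch, assuming a one-dimensional input and hand-picked tiling parameters: several offset tilings share the stored information, so memory grows with the number of tiles rather than the number of quantized states, and nearby inputs interpolate through shared cells.

    ```python
    import numpy as np

    class CMAC:
        """Each input activates one cell in each of several offset tilings,
        so learned values are stored distributively and nearby inputs
        share cells (built-in interpolation)."""
        def __init__(self, n_tilings=8, tiles=10, lo=0.0, hi=1.0, lr=0.1):
            self.n, self.t, self.lo, self.hi, self.lr = n_tilings, tiles, lo, hi, lr
            self.w = np.zeros((n_tilings, tiles + 1))

        def active(self, x):
            u = (x - self.lo) / (self.hi - self.lo) * self.t
            return [(k, int(u + k / self.n)) for k in range(self.n)]

        def predict(self, x):
            return sum(self.w[k, i] for k, i in self.active(x))

        def update(self, x, target):
            err = target - self.predict(x)
            for k, i in self.active(x):
                self.w[k, i] += self.lr * err / self.n

    cmac = CMAC()
    for _ in range(200):
        for x in np.linspace(0, 1, 21):
            cmac.update(x, np.sin(2 * np.pi * x))
    print(round(cmac.predict(0.37), 2), round(np.sin(2 * np.pi * 0.37), 2))
    ```
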
  • An information criterion for optimal neural network selection

    Page(s): 490 - 497

    The choice of an optimal neural network design for a given problem is addressed. A relationship between optimal network design and statistical model identification is described, and a derivative of Akaike's information criterion (AIC) is given. This modification yields an information statistic that can be used to objectively select a 'best' network for binary classification problems. The technique can be extended to problems with an arbitrary number of classes.

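    The paper derives a modification of AIC; the sketch below applies the plain criterion, AIC = -2 ln L + 2k, to choose between two hypothetical binary classifiers from their predicted probabilities and parameter counts. All numbers are made up for illustration.

    ```python
    import numpy as np

    def aic_binary(y, p, n_params):
        """Akaike's criterion for a binary classifier: -2 * Bernoulli
        log-likelihood plus a 2*k complexity penalty (k = free weights)."""
        p = np.clip(p, 1e-12, 1 - 1e-12)
        log_lik = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
        return -2.0 * log_lik + 2.0 * n_params

    # hypothetical candidates: (name, predicted probabilities, weight count)
    y = np.array([1, 0, 1, 1, 0, 1, 0, 0])
    candidates = [
        ("2 hidden units", np.array([.8, .3, .7, .9, .2, .8, .4, .1]), 9),
        ("8 hidden units", np.array([.9, .1, .8, .9, .1, .9, .2, .1]), 33),
    ]
    scores = {name: aic_binary(y, p, k) for name, p, k in candidates}
    print(min(scores, key=scores.get), scores)   # the penalty favors the small net
    ```
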
  • Convergence of learning algorithms with constant learning rates

    Page(s): 484 - 489

    The behavior of neural network learning algorithms with a small, constant learning rate ε in stationary, random input environments is investigated. It is rigorously established that, in the sense of weak convergence of random processes as ε tends to zero, the sequence of weight estimates can be approximated by a certain ordinary differential equation. As applications, backpropagation in feedforward architectures and some feature extraction algorithms are studied in more detail.

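    A one-dimensional illustration of the weak-convergence statement: a constant-step stochastic update, measured in ODE time t = ε · (step count), shadows the solution of its mean ODE ever more closely as ε shrinks. The toy update below estimates the mean of a stationary input stream; the numbers are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    mu, w0, T = 2.0, 0.0, 5.0    # input mean, initial weight, horizon in ODE time

    def sgd_path(eps):
        """Constant-step stochastic update w += eps*(x - w); with t = eps * step,
        the path should track the mean ODE dw/dt = mu - w as eps -> 0."""
        w = w0
        for _ in range(int(T / eps)):
            w += eps * (rng.normal(mu, 1.0) - w)
        return w

    ode_limit = mu + (w0 - mu) * np.exp(-T)   # closed-form ODE solution at t = T
    for eps in (0.1, 0.01, 0.001):
        print(eps, round(sgd_path(eps), 3), "ode:", round(ode_limit, 3))
    ```
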
  • A neural network approach to a Bayesian statistical decision problem

    Page(s): 538 - 540

    Generalized mean-squared error (GMSE) objective functions are proposed that can be used in neural networks to yield a Bayes-optimal solution to a statistical decision problem characterized by a generic loss function.

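    The paper's GMSE family is not spelled out in the abstract; as a rough illustration of the idea only, this sketch weights each squared error by a task-specific loss so that minimizing it pushes the network toward the decision rule favored under that loss. The asymmetric loss values are hypothetical.

    ```python
    import numpy as np

    def gmse(y_true, y_pred, loss_fn):
        """Generalized mean-squared error: each example's squared error is
        scaled by a loss weight derived from its target."""
        return np.mean(loss_fn(y_true) * (y_pred - y_true) ** 2)

    # hypothetical asymmetric loss: missing a positive costs 5x a false alarm
    loss_fn = lambda y: np.where(y == 1, 5.0, 1.0)
    y_true = np.array([1., 0., 1., 0.])
    print(gmse(y_true, np.array([.6, .2, .9, .1]), loss_fn))
    ```
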

Aims & Scope

IEEE Transactions on Neural Networks is devoted to the science and technology of neural networks, publishing significant technical knowledge, exploratory developments, and applications of neural networks from biology to software to hardware.


This Transactions ceased production in 2011. The current retitled publication is IEEE Transactions on Neural Networks and Learning Systems.
