IEEE Transactions on Neural Networks

Issue 1 • March 1990

  • Survey of neural network technology for automatic target recognition

    Page(s): 28 - 43

    A review is presented of ATR (automatic target recognition), together with highlights of neural network technology developments that have the potential to make a significant impact on ATR. In particular, neural network developments in the areas of collective computation, learning algorithms, expert systems, and neurocomputer hardware could provide crucial tools for developing improved algorithms and computational hardware for ATR. The discussion covers previous ATR system efforts, ATR issues and needs, early vision and collective computation, learning and adaptation for ATR, feature extraction, higher vision and expert systems, and neurocomputer hardware.

  • Self-organizing network for optimum supervised learning

    Page(s): 100 - 110

    A new algorithm called the self-organizing neural network (SONN) is introduced, and its use is demonstrated in a system identification task. The algorithm constructs a network, chooses the node functions, and adjusts the weights. It is compared to the backpropagation algorithm in the identification of a chaotic time series. The results show that SONN constructs a simpler, more accurate model, requiring less training data and fewer epochs. The algorithm can also be applied as a classifier.

  • ATM communications network control by neural networks

    Page(s): 122 - 130

    A learning method that uses neural networks for service quality control in the asynchronous transfer mode (ATM) communications network is described. Because the precise characteristics of the source traffic are not known and the service quality requirements change over time, building an efficient controller that can regulate the network traffic is a difficult task. The proposed ATM network controller uses backpropagation neural networks to learn the relations between the offered traffic and service quality. The neural network is adaptive and easy to implement. A training data selection method called the leaky pattern table method is proposed to learn precise relations. The performance of the proposed controller is evaluated by simulation of basic call admission models.

  • Two coding strategies for bidirectional associative memory

    Page(s): 81 - 92

    Enhancements of the encoding strategy of the discrete bidirectional associative memory (BAM) reported by B. Kosko (1987) are presented. There are two major concepts in this work: multiple training, which can be guaranteed to achieve recall of a single trained pair under suitable initial conditions of data, and dummy augmentation, which can be guaranteed to achieve recall of all trained pairs if attaching dummy data to the training pairs is allowable. In representative computer simulations, multiple training has been shown to improve on the original Kosko strategy for recall of multiple pairs as well. A sufficient condition for a correlation matrix to make the energies of the training pairs local minima is discussed. The use of the multiple training and dummy augmentation concepts is illustrated, and theorems underlying the results are presented.

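The Kosko encoding and the multiple-training idea can be sketched in a few lines. This is an illustration, not the paper's code: the bipolar pairs and the weighting vector `q` are assumptions. The correlation matrix is the sum of outer products of the pairs, multiple training simply weights one pair more heavily, and recall iterates X → Y → X through the matrix until a stable pair is reached.

```python
import numpy as np

def bam_recall(M, x, max_iters=20):
    """Iterate x -> y -> x through correlation matrix M until a stable
    bipolar pair is reached (threshold: sign; ties keep the old value)."""
    def threshold(v, prev):
        return np.where(v > 0, 1, np.where(v < 0, -1, prev))
    y = threshold(x @ M, np.ones(M.shape[1], dtype=int))
    for _ in range(max_iters):
        x_new = threshold(M @ y, x)
        y_new = threshold(x_new @ M, y)
        if np.array_equal(x_new, x) and np.array_equal(y_new, y):
            break
        x, y = x_new, y_new
    return x, y

# Kosko encoding: sum of outer products of bipolar training pairs.
# "Multiple training" reweights one pair (q > 1) to help guarantee its recall.
X = np.array([[1, -1, 1, -1, 1, -1], [1, 1, 1, -1, -1, -1]])
Y = np.array([[1, 1, -1, -1], [1, -1, 1, -1]])
q = np.array([3, 1])              # emphasize the first pair
M = sum(qi * np.outer(xi, yi) for qi, xi, yi in zip(q, X, Y))

xr, yr = bam_recall(M, X[0])      # recalls the emphasized pair (X[0], Y[0])
```
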
  • Model-free distributed learning

    Page(s): 58 - 70

    Model-free learning for synchronous and asynchronous quasi-static networks is presented. The network weights are continuously perturbed while the time-varying performance index is measured and correlated with the perturbation signals; the correlation output determines the changes in the weights. The perturbation may be either via noise sources or via orthogonal signals. The invariance to detailed network structure mitigates large variability between supposedly identical networks as well as implementation defects. This local, regular, and completely distributed mechanism requires no central control and involves only a few global signals. Thus, it allows for integrated, on-chip learning in large analog and optical networks.

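The perturb-measure-correlate loop can be illustrated with a toy sketch (not the paper's implementation): every weight is perturbed simultaneously with a random sign, the measured change in the performance index is correlated with those signs, and the correlation drives the update. The linear unit, step sizes, and seed below are assumptions; note the rule touches only the scalar index J, never the network's internal structure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "network": a linear unit. The learning rule never uses its structure,
# only the measured performance index J (here, squared error on one batch).
X = rng.normal(size=(64, 3))
w_true = np.array([1.5, -2.0, 0.5])
y = X @ w_true

def J(w):
    return np.mean((X @ w - y) ** 2)

w = np.zeros(3)
eps, lr = 1e-3, 0.1
for _ in range(2000):
    delta = rng.choice([-1.0, 1.0], size=w.shape)  # perturbation signs
    dJ = J(w + eps * delta) - J(w)                 # measured index change
    w -= lr * (dJ / eps) * delta                   # correlate and update
```

On average the correlated update points down the gradient, so the index decreases without any backpropagated error signal, which is what makes the mechanism attractive for analog hardware.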
  • Unsupervised learning in noise

    Page(s): 44 - 57

    A new hybrid learning law, the differential competitive law, which uses the neuronal signal velocity as a local unsupervised reinforcement mechanism, is introduced, and its coding and stability behavior in feedforward and feedback networks is examined. This analysis is facilitated by the recent Gluck-Parker pulse-coding interpretation of signal functions in differential Hebbian learning systems. The second-order behavior of RABAM (random adaptive bidirectional associative memory) Brownian-diffusion systems is summarized by the RABAM noise suppression theorem: the mean-squared activation and synaptic velocities decrease exponentially quickly to their lower bounds, the instantaneous noise variances driving the system. This result is extended to the RABAM annealing model, which provides a unified framework for analyzing Geman-Hwang combinatorial optimization dynamical systems and continuous Boltzmann machine learning.

  • Three-dimensional neural net for learning visuomotor coordination of a robot arm

    Page(s): 131 - 136

    An extension of T. Kohonen's (1982) self-organizing mapping algorithm, together with an error-correction scheme based on the Widrow-Hoff learning rule, is applied to develop a learning algorithm for the visuomotor coordination of a simulated robot arm. Learning occurs by a sequence of trial movements without the need for an external teacher. Using input signals from a pair of cameras, the closed-loop robot arm system is able to reduce its positioning error to about 0.3% of the linear dimensions of its work space. This is achieved by choosing the connectivity of a three-dimensional lattice consisting of the units of the neural net.

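The Widrow-Hoff (LMS) error-correction rule the article builds on can be sketched on its own: the weights are nudged along the input in proportion to the output error. The data, dimensions, and learning rate below are assumptions for illustration, not the paper's visuomotor setup.

```python
import numpy as np

rng = np.random.default_rng(2)

# Widrow-Hoff / LMS rule: w += lr * (d - w.x) * x, driven only by the
# scalar output error, one sample at a time.
X = rng.normal(size=(500, 4))
w_true = np.array([0.5, -1.0, 2.0, 0.25])
d = X @ w_true                      # desired responses

w = np.zeros(4)
lr = 0.05
for x, target in zip(X, d):
    err = target - w @ x            # output error for this sample
    w += lr * err * x               # correct weights along the input
```
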
  • Identification and control of dynamical systems using neural networks

    Page(s): 4 - 27

    It is demonstrated that neural networks can be used effectively for the identification and control of nonlinear dynamical systems. The emphasis is on models for both identification and control. Static and dynamic backpropagation methods for the adjustment of parameters are discussed. In the models that are introduced, multilayer and recurrent networks are interconnected in novel configurations, and hence there is a real need to study them in a unified fashion. Simulation results reveal that the identification and adaptive control schemes suggested are practically feasible. Basic concepts and definitions are introduced throughout, and theoretical questions that have to be addressed are also described.

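A common identification scheme of this kind, the series-parallel model, can be sketched in a toy form: the measured plant output (not the model's own prediction) is fed to a small network trained by static backpropagation to predict the next output. The plant, network size, and learning rate below are assumptions for illustration, not the paper's examples.

```python
import numpy as np

rng = np.random.default_rng(3)

# Unknown plant (assumed for illustration): y(k+1) = 0.8*sin(y(k)) + u(k)
def plant(yk, uk):
    return 0.8 * np.sin(yk) + uk

# Series-parallel identification model: yhat(k+1) = N(y(k), u(k)),
# a one-hidden-layer network trained by static backpropagation.
H = 16
W1 = rng.normal(scale=0.5, size=(H, 2)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.5, size=H);      b2 = 0.0

lr = 0.05
y = 0.0
for k in range(20000):
    u = rng.uniform(-1, 1)               # persistently exciting input
    x = np.array([y, u])                 # measured plant state fed to model
    h = np.tanh(W1 @ x + b1)
    yhat = W2 @ h + b2
    y_next = plant(y, u)
    e = yhat - y_next                    # one-step prediction error
    # Backpropagate the error through the two layers.
    gh = e * W2 * (1 - h ** 2)
    W1 -= lr * np.outer(gh, x); b1 -= lr * gh
    W2 -= lr * e * h;           b2 -= lr * e
    y = y_next
```

Feeding the real plant output back into the model keeps training stable even while the network is still inaccurate, which is the main appeal of the series-parallel configuration.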
  • Variants of self-organizing maps

    Page(s): 93 - 99

    Self-organizing maps have a bearing on traditional vector quantization. A characteristic that makes them more closely resemble certain biological brain maps, however, is the spatial order of their responses, which is formed in the learning process. A discussion is presented of the basic algorithms and two innovations: dynamic weighting of the input signals at each input of each cell, which improves the ordering when very different input signals are used, and definition of neighborhoods in the learning algorithm by the minimal spanning tree, which provides a far better and faster approximation of prominently structured density functions. It is cautioned that if the maps are used for pattern recognition and decision processes, it is necessary to fine-tune the reference vectors so that they directly define the decision borders.

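Both variants build on the same basic self-organizing map update: find the best-matching cell, then pull it and its lattice neighbors toward the input with a decaying gain and neighborhood radius. A minimal 1-D-lattice sketch, with lattice size, schedules, and data assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)

grid = 8                                   # 1-D lattice of 8 cells
m = rng.uniform(size=(grid, 2))            # reference vectors in input space
data = rng.uniform(size=(4000, 2))         # uniform 2-D inputs

for t, x in enumerate(data):
    lr = 0.5 * (1 - t / len(data))         # decaying gain
    radius = max(1, int(3 * (1 - t / len(data))))
    win = np.argmin(np.linalg.norm(m - x, axis=1))   # best-matching cell
    for j in range(max(0, win - radius), min(grid, win + radius + 1)):
        m[j] += lr * (x - m[j])            # neighborhood update
```

It is the neighborhood term, absent from plain vector quantization, that produces the spatial ordering of responses the abstract highlights.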
  • Sensitivity of feedforward neural networks to weight errors

    Page(s): 71 - 80

    An analysis is made of the sensitivity of feedforward layered networks of Adaline elements (threshold logic units) to weight errors. An approximation is derived which expresses the probability of error for an output neuron of a large network (a network with many neurons per layer) as a function of the percentage change in the weights. As would be expected, the probability of error increases with the number of layers in the network and with the percentage change in the weights. The probability of error is essentially independent of the number of weights per neuron and of the number of neurons per layer, as long as these numbers are large (on the order of 100 or more).

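The quantity analyzed, the probability that the output errs under a given percentage weight change, can also be estimated empirically by Monte Carlo. The sketch below illustrates the setup rather than the paper's derivation; the network size, random weights, perturbation model, and trial count are assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

def madaline(x, layers):
    """Feedforward net of threshold (Adaline) units: sign of weighted sum."""
    for W in layers:
        x = np.sign(W @ x)
        x[x == 0] = 1
    return x

# Monte Carlo estimate of the probability that the output changes when
# every weight is altered by a fixed percentage with a random sign.
n, L = 30, 3                      # neurons per layer, number of layers
layers = [rng.normal(size=(n, n)) for _ in range(L)]
pct = 0.05                        # 5% weight perturbation

trials, errors = 400, 0
for _ in range(trials):
    x = rng.choice([-1.0, 1.0], size=n)
    noisy = [W * (1 + pct * rng.choice([-1, 1], size=W.shape)) for W in layers]
    errors += np.any(madaline(x, layers) != madaline(x, noisy))
p_err = errors / trials
```

Increasing `L` or `pct` in this sketch raises the estimated error probability, matching the trend the abstract reports.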
  • Neural controller for adaptive movements with unforeseen payloads

    Page(s): 137 - 142

    A theory and computer simulation of a neural controller that learns to move and position a link carrying an unforeseen payload accurately are presented. The neural controller learns adaptive dynamic control from its own experience. It does not use information about link mass, link length, or direction of gravity, and it uses only indirect uncalibrated information about payload and actuator limits. Its average positioning accuracy across a large range of payloads after learning is 3% of the positioning range. This neural controller can be used as a basis for coordinating any number of sensory inputs with limbs of any number of joints. The feedforward nature of control allows parallel implementation in real time across multiple joints.

  • A parallel algorithm for tiling problems

    Page(s): 143 - 145

    A parallel algorithm for tiling with polyominoes is presented. The tiling problem is to pack polyominoes in a finite checkerboard. The algorithm using l×m×n processing elements requires O(1) time, where l is the number of different kinds of polyominoes on an m×n checkerboard. The algorithm can be used for placement of components or cells in a very large-scale integrated circuit (VLSI) chip, designing and compacting printed circuit boards, and solving a variety of two- or three-dimensional packing problems.

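To make the problem concrete, here is a plain sequential backtracking formulation, restricted to dominoes (2×1 polyominoes) for brevity. It counts the packings that cover an m×n checkerboard exactly; the paper's contribution is solving such problems in O(1) time on an l×m×n processor array, which this sketch does not attempt.

```python
def tile_dominoes(m, n):
    """Count the ways to cover an m x n board exactly with 2x1 dominoes."""
    board = [[False] * n for _ in range(m)]

    def first_free():
        for i in range(m):
            for j in range(n):
                if not board[i][j]:
                    return i, j
        return None                       # every cell covered

    def solve():
        cell = first_free()
        if cell is None:
            return 1                      # one complete tiling found
        i, j = cell
        count = 0
        # Cover the first free cell horizontally or vertically, then recurse.
        for di, dj in ((0, 1), (1, 0)):
            ni, nj = i + di, j + dj
            if ni < m and nj < n and not board[ni][nj]:
                board[i][j] = board[ni][nj] = True
                count += solve()
                board[i][j] = board[ni][nj] = False
        return count

    return solve()
```
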
  • Probabilistic neural networks and the polynomial Adaline as complementary techniques for classification

    Page(s): 111 - 121

    Two methods for classification based on the Bayes strategy and nonparametric estimators for probability density functions are reviewed. The two methods are named the probabilistic neural network (PNN) and the polynomial Adaline. Both methods involve one-pass learning algorithms that can be implemented directly in parallel neural network architectures. The performances of the two methods are compared with multipass backpropagation networks, and relative advantages and disadvantages are discussed. PNN and the polynomial Adaline are complementary techniques because they implement the same decision boundaries but have different advantages for applications. PNN is easy to use and is extremely fast for moderate-sized databases. For very large databases and for mature applications in which classification speed is more important than training speed, the polynomial equivalent can be found.

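A minimal PNN sketch, assuming a Gaussian Parzen kernel and a hand-picked smoothing parameter (the data and `sigma` below are illustrative assumptions): each class's score is the average kernel activation over its training exemplars, so "learning" is a single pass that stores the data, and the largest score wins.

```python
import numpy as np

def pnn_classify(x, train_X, train_y, sigma=0.3):
    """Probabilistic neural network: Parzen-window class scores with a
    Gaussian kernel; return the class with the largest summed activation."""
    scores = {}
    for c in np.unique(train_y):
        Xc = train_X[train_y == c]
        d2 = np.sum((Xc - x) ** 2, axis=1)       # squared distances
        scores[c] = np.mean(np.exp(-d2 / (2 * sigma ** 2)))
    return max(scores, key=scores.get)

rng = np.random.default_rng(7)
X0 = rng.normal(loc=-1.0, scale=0.3, size=(50, 2))   # class 0 cluster
X1 = rng.normal(loc=+1.0, scale=0.3, size=(50, 2))   # class 1 cluster
train_X = np.vstack([X0, X1])
train_y = np.array([0] * 50 + [1] * 50)

pred = pnn_classify(np.array([-1.0, -1.0]), train_X, train_y)
```

The one-pass storage is why PNN trains instantly but slows at classification time on very large databases, motivating the polynomial equivalent the abstract mentions.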

Aims & Scope

IEEE Transactions on Neural Networks is devoted to the science and technology of neural networks, publishing significant technical knowledge, exploratory developments, and applications of neural networks from biology to software to hardware.

 

This Transactions ceased production in 2011. The current retitled publication is IEEE Transactions on Neural Networks and Learning Systems.
