IEEE Transactions on Neural Networks

Issue 2 • March 1992

Displaying Results 1 - 20 of 20
  • A machine learning method for generation of a neural network architecture: a continuous ID3 algorithm

    Publication Year: 1992, Page(s):280 - 291
    Cited by:  Papers (51)  |  Patents (1)

    The relation between the decision trees generated by a machine learning algorithm and the hidden layers of a neural network is described. A continuous ID3 algorithm is proposed that converts decision trees into hidden layers. The algorithm allows self-generation of a feedforward neural network architecture. In addition, it allows interpretation of the knowledge embedded in the generated connection...

  • Maximum entropy signal reconstruction with neural networks

    Publication Year: 1992, Page(s):195 - 201
    Cited by:  Papers (16)

    The implementation of maximum entropy reconstruction algorithms by means of neural networks is discussed. It is shown that the solutions of the maximum entropy problem correspond to the steady states of the appropriate Hopfield net. The choice of network parameters is discussed, and the existence of the maximum entropy solution is proved.

  • A neural detector for seismic reflectivity sequences

    Publication Year: 1992, Page(s):338 - 340
    Cited by:  Papers (3)

    A commonly used routine in seismic signal processing is deconvolution, which comprises two operations: reflectivity detection and magnitude estimation. Existing statistical detectors are computationally expensive. In the paper, a Hopfield neural network is constructed to perform the reflectivity detection operation. The basic idea is to represent the reflectivity detection problem by an equivalent...

  • Knapsack packing networks

    Publication Year: 1992, Page(s):302 - 307
    Cited by:  Papers (10)

    A knapsack packing neural network of 4n units with both low-order and conjunctive asymmetric synapses is derived from a non-Hamiltonian energy function. Parallel simulations of randomly generated problems of size n in {5, 10, 20} are used to compare network solutions with those of simple greedy fast parallel enumerative algorithms.

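For context, the simple greedy baseline that network solutions are compared against can be sketched as follows. This is a minimal sketch of a standard density-ordered greedy heuristic, not the paper's implementation; the item values, weights, and capacity are illustrative, not the paper's random instances.

```python
# Greedy 0/1 knapsack baseline: consider items in order of
# value/weight density and take each item that still fits.

def greedy_knapsack(values, weights, capacity):
    """Return (chosen item indices, total value) for a greedy packing."""
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    chosen, total_weight, total_value = [], 0, 0
    for i in order:
        if total_weight + weights[i] <= capacity:
            chosen.append(i)
            total_weight += weights[i]
            total_value += values[i]
    return chosen, total_value

# Illustrative instance: three items, capacity 50.
chosen, value = greedy_knapsack([60, 100, 120], [10, 20, 30], 50)
print(chosen, value)   # items 0 and 1 fit; total value 160
```

Greedy packing is fast but not optimal in general (here the optimum is items 1 and 2, value 220), which is why it serves as a baseline rather than a reference solution.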
  • An evolution-oriented learning algorithm for the optimal interpolative net

    Publication Year: 1992, Page(s):315 - 323
    Cited by:  Papers (21)

    An evolution-oriented learning algorithm is presented for the optimal interpolative (OI) artificial neural net proposed by R. J. P. deFigueiredo (1990). The algorithm is based on a recursive least squares training procedure. One of its key attributes is that it incorporates in the structure of the net the smallest number of prototypes from the training set T necessary to correctly classif...

  • Rotation-invariant neural pattern recognition system with application to coin recognition

    Publication Year: 1992, Page(s):272 - 279
    Cited by:  Papers (67)  |  Patents (7)

    In pattern recognition it is often necessary to classify patterns that have undergone a transformation. A neural pattern recognition system that is insensitive to rotation of the input pattern by various degrees is proposed. The system consists of a fixed invariance network with many slabs and a trainable multilayered network. The system was used in a rotation-invariant coin recognition problem to dis...

  • A training algorithm for binary feedforward neural networks

    Publication Year: 1992, Page(s):176 - 194
    Cited by:  Papers (51)

    The authors present a new training algorithm to be used on a four-layer perceptron-type feedforward neural network for the generation of binary-to-binary mappings. This algorithm is called the Boolean-like training algorithm (BLTA) and is derived from original principles of Boolean algebra followed by selected extensions. The algorithm can be implemented on analog hardware, using a four-layer bina...

  • Analysis of the effects of quantization in multilayer neural networks using a statistical model

    Publication Year: 1992, Page(s):334 - 338
    Cited by:  Papers (27)  |  Patents (1)

    A statistical quantization model is used to analyze the effects of quantization when digital techniques are used to implement a real-valued feedforward multilayer neural network. In this process, a parameter called the effective nonlinearity coefficient, which is important in studying quantization effects, is introduced. General statistical formulations of the performance degradation of the...


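The basic phenomenon studied here can be illustrated empirically. The sketch below is not the paper's statistical model: it simply quantizes the weights of a single sigmoid unit to b bits and observes how the output changes; the weight range, unit size, and random values are all illustrative assumptions.

```python
# Illustrative sketch: uniform b-bit quantization of the weights of
# one sigmoid unit, and the resulting error in the unit's output.
import math
import random

def quantize(w, bits, w_max=1.0):
    """Uniform quantization to 2**bits levels on [-w_max, w_max], with clipping."""
    step = 2 * w_max / (2 ** bits - 1)
    q = round(w / step) * step
    return max(-w_max, min(w_max, q))

def unit_output(weights, inputs):
    """A single real-valued unit: weighted sum through a sigmoid."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1.0 / (1.0 + math.exp(-s))

random.seed(0)
weights = [random.uniform(-1, 1) for _ in range(8)]
inputs = [random.uniform(-1, 1) for _ in range(8)]
for bits in (2, 4, 8):
    q_weights = [quantize(w, bits) for w in weights]
    err = abs(unit_output(weights, inputs) - unit_output(q_weights, inputs))
    print(f"{bits}-bit weights: output error = {err:.5f}")
```

In a multilayer network these per-unit errors propagate and interact with the nonlinearity, which is what motivates a statistical treatment rather than a per-weight worst-case bound.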
  • Global optimization of a neural network-hidden Markov model hybrid

    Publication Year: 1992, Page(s):252 - 259
    Cited by:  Papers (73)  |  Patents (5)

    The integration of multilayered and recurrent artificial neural networks (ANNs) with hidden Markov models (HMMs) is addressed. ANNs are suitable for approximating functions that compute new acoustic parameters, whereas HMMs have been proven successful at modeling the temporal structure of the speech signal. In the approach described, the ANN outputs constitute the sequence of observation vectors f...

  • Iterative inversion of neural networks and its application to adaptive control

    Publication Year: 1992, Page(s):292 - 301
    Cited by:  Papers (38)

    An iterative constrained inversion technique is used to find the control inputs to the plant. That is, rather than training a controller network and placing this network directly in the feedback or feedforward paths, the forward model of the plant is learned, and iterative inversion is performed on line to generate control commands. The control approach allows the controllers to respond online to ...

  • Using random weights to train multilayer networks of hard-limiting units

    Publication Year: 1992, Page(s):202 - 210
    Cited by:  Papers (10)

    A gradient descent algorithm suitable for training multilayer feedforward networks of processing units with hard-limiting output functions is presented. The conventional backpropagation algorithm cannot be applied in this case because the required derivatives are not available. However, if the network weights are random variables with smooth distribution functions, the probability of a hard-limiti...

  • A pulsed neural network capable of universal approximation

    Publication Year: 1992, Page(s):308 - 314
    Cited by:  Papers (5)  |  Patents (2)

    The authors describe a pulsed network version of the cerebellar model articulation controller (CMAC), popularized by Albus (1981). The network produces output pulses whose times of occurrence are a function of input pulse intervals. Within limits imposed by causality conditions, this function can approximate any bounded measurable function on a compact domain. Simulation results demonstrate the vi...

  • Fast generic selection of features for neural network classifiers

    Publication Year: 1992, Page(s):324 - 328
    Cited by:  Papers (106)  |  Patents (1)

    The authors describe experiments using a genetic algorithm for feature selection in the context of neural network classifiers, specifically, counterpropagation networks. They present the novel techniques used in the application of genetic algorithms. First, the genetic algorithm is configured to use an approximate evaluation in order to reduce significantly the computation required. In particular,...

  • Information geometry of Boltzmann machines

    Publication Year: 1992, Page(s):260 - 271
    Cited by:  Papers (78)

    A Boltzmann machine is a network of stochastic neurons. The set of all the Boltzmann machines with a fixed topology forms a geometric manifold of high dimension, where modifiable synaptic weights of connections play the role of a coordinate system to specify networks. A learning trajectory, for example, is a curve in this manifold. It is important to study the geometry of the neural manifold, rath...

  • Adaptive fuzzy systems for backing up a truck-and-trailer

    Publication Year: 1992, Page(s):211 - 223
    Cited by:  Papers (165)  |  Patents (2)

    Fuzzy control systems and neural-network control systems for backing up a simulated truck, and truck-and-trailer, to a loading dock in a parking lot are presented. The supervised backpropagation learning algorithm trained the neural network systems. The robustness of the neural systems was tested by removing random subsets of training data in learning sequences. The neural systems performed well b...

  • Competitive learning with generalized winner-take-all activation

    Publication Year: 1992, Page(s):167 - 175
    Cited by:  Papers (13)  |  Patents (3)

    Competitive learning paradigms are usually defined with winner-take-all (WTA) activation rules. The paper develops a mathematical model for competitive learning paradigms using a generalization of the WTA activation rule (g-WTA). The model is a partial differential equation (PDE) relating the time rate of change in the 'density' of weight vectors to the divergence of a vector field called the neur...

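For reference, the conventional WTA activation rule that this work generalizes can be sketched as follows; the weight vectors and input are illustrative, and the distance-based winner selection is one common convention, not necessarily the paper's.

```python
# Minimal sketch of a conventional winner-take-all (WTA) activation:
# only the unit whose weight vector lies closest to the input fires.

def wta_activation(x, weight_vectors):
    """Return a one-hot activation vector for the unit closest to x
    (squared Euclidean distance)."""
    dists = [sum((xi - wi) ** 2 for xi, wi in zip(x, w))
             for w in weight_vectors]
    winner = dists.index(min(dists))
    return [1.0 if j == winner else 0.0 for j in range(len(weight_vectors))]

# In competitive learning, only the winning unit's weights are then
# moved toward the input x.
W = [[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]]
print(wta_activation([0.9, 1.1], W))   # unit 1 is closest and wins
```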
  • Designing multilayer perceptrons from nearest-neighbor systems

    Publication Year: 1992, Page(s):329 - 333
    Cited by:  Papers (14)

    Although multilayer perceptrons have been shown to be adept at providing good solutions to many problems, they have a major drawback in the very large amount of time needed for training (for example, on the order of CPU days for some of the author's experiments). The paper describes a method of producing a reasonable starting point by using a nearest-neighbor classifier. The method is further expa...

  • Neural network application for direct feedback controllers

    Publication Year: 1992, Page(s):224 - 231
    Cited by:  Papers (45)

    The author presents a learning algorithm and the capabilities of perceptron-like neural networks whose outputs and inputs are connected directly to the plant, just like ordinary feedback controllers. This simple configuration makes the network difficult to teach. In addition, it is preferable to let the network learn so that a global and arbitrary evaluation of the total responses of the plant ...

  • Optimization for training neural nets

    Publication Year: 1992, Page(s):232 - 240
    Cited by:  Papers (63)

    Various techniques of optimizing criterion functions to train neural-net classifiers are investigated. These techniques include three standard deterministic techniques (variable metric, conjugate gradient, and steepest descent), and a new stochastic technique. It is found that the stochastic technique is preferable on problems with large training sets and that the convergence rates of the variable...

  • Translation, rotation, and scale invariant pattern recognition by high-order neural networks and moment classifiers

    Publication Year: 1992, Page(s):241 - 251
    Cited by:  Papers (76)  |  Patents (2)

    The classification and recognition of two-dimensional patterns independently of their position, orientation, and size by using high-order networks are discussed. A method is introduced for reducing and controlling the number of weights of a third-order network used for invariant pattern recognition. The method leads to economical networks that exhibit high recognition rates for translated, rotated...

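The moment-classifier side of this comparison rests on a standard construction: central moments normalized so they do not change under translation or scaling. A minimal sketch of that idea, on an illustrative binary image (not the paper's data or its exact feature set):

```python
# Translation- and scale-normalized central moments of a 2D image,
# the kind of features a moment classifier operates on.

def central_moment(img, p, q):
    """mu_pq = sum over pixels of (x - xbar)**p * (y - ybar)**q * I(x, y)."""
    m00 = sum(v for row in img for v in row)
    xbar = sum(x * v for row in img for x, v in enumerate(row)) / m00
    ybar = sum(y * v for y, row in enumerate(img) for v in row) / m00
    return sum((x - xbar) ** p * (y - ybar) ** q * v
               for y, row in enumerate(img) for x, v in enumerate(row))

def normalized_moment(img, p, q):
    """eta_pq = mu_pq / mu_00**(1 + (p + q)/2), invariant to translation and scale."""
    mu00 = central_moment(img, 0, 0)
    return central_moment(img, p, q) / mu00 ** (1 + (p + q) / 2)

plus = [[0, 1, 0],
        [1, 1, 1],
        [0, 1, 0]]
print(normalized_moment(plus, 2, 0))   # unchanged under any translation of the shape
```

Rotation invariance requires one further step (combining the eta_pq into rotation-invariant quantities, as in Hu's moment invariants), which is what distinguishes moment classifiers from raw moment features.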

Aims & Scope

IEEE Transactions on Neural Networks is devoted to the science and technology of neural networks, publishing significant technical knowledge, exploratory developments, and applications of neural networks from biology to software to hardware.

 

This Transactions ceased production in 2011. The current retitled publication is IEEE Transactions on Neural Networks and Learning Systems.
