IEEE Transactions on Neural Networks

Issue 1 • January 1992

Displaying Results 1 - 17 of 17
  • Hopfield network for stereo vision correspondence

    Page(s): 5 - 13

    An optimization approach is used to solve the correspondence problem for a set of features extracted from a pair of stereo images. A cost function is defined to represent the constraints on the solution, which is then mapped onto a two-dimensional Hopfield neural network for minimization. Each neuron in the network represents a possible match between a feature in the left image and one in the right image. Correspondence is achieved by initializing (exciting) each neuron that represents a possible match and then allowing the network to settle down into a stable state. The network uses the initial inputs and the compatibility measures between the matched points to find a stable state.

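    A minimal sketch of the approach described above, with an assumed cost structure (the paper's exact cost function and compatibility measures are not reproduced): one binary neuron per candidate left-right match, every candidate excited initially, and asynchronous updates until a stable state is reached:

        import numpy as np

        def hopfield_match(W, bias, n_sweeps=50, rng=None):
            """W: symmetric compatibility between candidate matches (illustrative),
            bias: initial excitation of each candidate match."""
            rng = np.random.default_rng(0) if rng is None else rng
            v = np.ones(len(bias))                    # excite every candidate match
            for _ in range(n_sweeps):
                changed = False
                for i in rng.permutation(len(v)):     # asynchronous updates
                    new = 1.0 if W[i] @ v + bias[i] > 0 else 0.0
                    changed |= new != v[i]
                    v[i] = new
                if not changed:                       # stable state reached
                    break
            return v                                  # 1 = match kept, 0 = rejected

        # toy data (assumed): matches 0 and 1 support each other, both conflict with 2
        W = np.array([[0., 1., -2.], [1., 0., -2.], [-2., -2., 0.]])
        print(hopfield_match(W, bias=np.array([1.5, 1.5, 0.5])))   # -> [1. 1. 0.]
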
  • Sensitivity analysis of multilayer perceptron with differentiable activation functions

    Page(s): 101 - 107

    In a neural network, many different sets of connection weights can approximately realize an input-output mapping, and the sensitivity of the network varies depending on which set is used. For selecting weights with lower sensitivity, or for estimating output perturbations in an implementation, it is important to measure the sensitivity of the network to its weights. A sensitivity that depends on the weight set in a single-output multilayer perceptron (MLP) with differentiable activation functions is proposed. Formulas are derived to compute the sensitivity arising from additive/multiplicative weight perturbations or input perturbations for a specific input pattern. The concept of sensitivity is then extended so that it can be applied to any input pattern, and several sensitivity measures for the multiple-output MLP are suggested. To verify the validity of the proposed sensitivities, computer simulations were performed, showing good agreement between theoretical and simulated results for small weight perturbations.

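    An illustrative sketch of the quantity being studied, assuming a small single-output tanh MLP and a numerical (finite-difference) estimate in place of the paper's closed-form sensitivity formulas:

        import numpy as np

        def mlp(x, W1, b1, w2, b2):                  # single-output tanh MLP
            return np.tanh(W1 @ x + b1) @ w2 + b2

        rng = np.random.default_rng(0)
        W1, b1 = rng.standard_normal((5, 3)), rng.standard_normal(5)
        w2, b2 = rng.standard_normal(5), rng.standard_normal()
        x = rng.standard_normal(3)                   # one specific input pattern

        # finite-difference stand-in for the analytic sensitivity formulas
        eps, trials = 1e-3, 1000
        y0 = mlp(x, W1, b1, w2, b2)
        dev = [abs(mlp(x, W1 + eps * rng.standard_normal(W1.shape), b1,
                       w2 + eps * rng.standard_normal(w2.shape), b2) - y0)
               for _ in range(trials)]
        print("mean |output deviation| for additive weight noise of scale", eps,
              ":", float(np.mean(dev)))
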
  • A parallel improvement algorithm for the bipartite subgraph problem

    Page(s): 139 - 145

    The authors propose the first parallel improvement algorithm using the maximum neural network model for the bipartite subgraph problem. The goal of this NP-complete problem is to remove the minimum number of edges from a given graph such that the remaining graph is bipartite. A large number of instances have been simulated to verify the proposed algorithm, and the results show that it finds a solution within 200 iteration steps and that the solution quality is superior to that of the best existing algorithm. The algorithm is also extended to the K-partite subgraph problem, for which no algorithm had previously been proposed.

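    An illustrative sketch of the improvement idea, assuming a plain flip-if-it-helps rule per vertex (rendered sequentially) rather than the authors' exact maximum-neuron dynamics:

        import numpy as np

        def bipartite_improve(adj, n_sweeps=200, rng=None):
            """adj: 0/1 symmetric adjacency matrix; returns a 0/1 side for each vertex."""
            rng = np.random.default_rng(0) if rng is None else rng
            side = rng.integers(0, 2, len(adj))
            for _ in range(n_sweeps):
                moved = False
                for v in rng.permutation(len(adj)):
                    same = np.sum(adj[v][side == side[v]])    # edges kept inside one part
                    other = np.sum(adj[v][side != side[v]])
                    if same > other:                          # flipping removes conflicts
                        side[v] ^= 1
                        moved = True
                if not moved:
                    break
            return side

        # toy graph (a triangle): at least one edge must be removed to make it bipartite
        adj = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
        print(bipartite_improve(adj))
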
  • Hierarchically structured unit-simplex transformations for parallel distributed optimization problems

    Page(s): 108 - 114

    A stable deterministic approach is presented for incorporating unit-simplex constraints based on a hierarchical deformable-template structure. This approach (i) guarantees strict confinement of the search to the unit-simplex constraint set without introducing unwanted constraints; (ii) leads to a hierarchical, rather than a global, network interconnection structure; (iii) allows multiresolution processing; and (iv) allows easy closed-form incorporation of certain other inherently global constraints, such as general recursive symmetries. Selected examples are presented that illustrate large-scale application of the template method.

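    The hierarchical template construction is not reproduced here; as a minimal illustration of strictly confining a search to the unit simplex without adding penalty constraints, a softmax reparameterization (an assumption, not the paper's method) can be sketched as follows:

        import numpy as np

        def softmax(z):
            e = np.exp(z - z.max())
            return e / e.sum()

        def minimize_on_simplex(grad_f, n, steps=2000, lr=0.5):
            z = np.zeros(n)                      # unconstrained parameters
            for _ in range(steps):
                x = softmax(z)                   # always exactly on the unit simplex
                g = grad_f(x)
                z -= lr * x * (g - x @ g)        # chain rule through the softmax
            return softmax(z)

        # toy problem (assumed): minimize ||x - t||^2 over the simplex
        t = np.array([0.2, 0.5, 0.3])
        print(minimize_on_simplex(lambda x: 2 * (x - t), 3))   # -> approximately t
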
  • Adaptive Ho-Kashyap rules for perceptron training

    Page(s): 51 - 61

    Three adaptive versions of the Ho-Kashyap perceptron training algorithm are derived based on gradient descent strategies. These adaptive Ho-Kashyap (AHK) training rules are comparable in their complexity to the LMS and perceptron training rules and are capable of adaptively forming linear discriminant surfaces that guarantee linear separability and of positioning such surfaces for maximal classification robustness. In particular, a derived version called AHK II is capable of adaptively identifying critical input vectors lying close to class boundaries in linearly separable problems. The authors extend this algorithm as AHK III, which adds the capability of fast convergence to linear discriminant surfaces which are good approximations for nonlinearly separable problems. This is achieved by a simple built-in unsupervised strategy which allows for the adaptive grading and discarding of input vectors causing nonseparability. Performance comparisons with LMS and perceptron training are presented.

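    For reference, a sketch of the classic batch Ho-Kashyap procedure from which the adaptive AHK rules are derived; the adaptive variants themselves are not reproduced, and the data and constants are illustrative:

        import numpy as np

        def ho_kashyap(Y, steps=200, rho=0.5):
            """Y: class-normalized, bias-augmented samples; seeks a with Y a > 0."""
            b = np.ones(len(Y))                  # positive margins
            a = np.linalg.pinv(Y) @ b
            for _ in range(steps):
                e = Y @ a - b                    # error against the current margins
                b += rho * (e + np.abs(e))       # margins only ever increase
                a = np.linalg.pinv(Y) @ b        # least-squares weights for new margins
                if np.all(e >= 0):               # every sample on the right side
                    break
            return a

        # toy 1-D data (assumed): rows are [x, 1], negative-class rows negated
        Y = np.array([[2., 1.], [3., 1.], [1., -1.], [2., -1.]])
        print(ho_kashyap(Y))                     # a separating weight vector
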
  • Weight perturbation: an optimal architecture and learning technique for analog VLSI feedforward and recurrent multilayer networks

    Page(s): 154 - 157

    Previous work on analog VLSI implementation of multilayer perceptrons with on-chip learning has mainly targeted the implementation of algorithms such as back-propagation. Although back-propagation is efficient, its implementation in analog VLSI requires excessive computational hardware. It is shown that using gradient descent with a direct approximation of the gradient instead of back-propagation is more economical for parallel analog implementations, and that this technique (called `weight perturbation') is suitable for multilayer recurrent networks as well. A discrete-level analog implementation is presented, showing the training of an XOR network as an example.

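    A minimal software sketch of the weight-perturbation idea on the XOR example mentioned above, assuming a small 2-3-1 network, a one-weight-at-a-time perturbation schedule, and illustrative constants (the analog-hardware details are omitted):

        import numpy as np

        X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
        T = np.array([0., 1., 1., 0.])                     # XOR targets

        def forward(w, x):                                 # 2-3-1 tanh network
            W1, b1 = w[:6].reshape(3, 2), w[6:9]
            w2, b2 = w[9:12], w[12]
            return np.tanh(w2 @ np.tanh(W1 @ x + b1) + b2)

        def error(w):
            return sum((forward(w, x) - t) ** 2 for x, t in zip(X, T))

        rng = np.random.default_rng(1)
        w = 0.5 * rng.standard_normal(13)
        pert, lr = 1e-4, 0.3                               # illustrative constants
        for _ in range(2000):
            e0, grad = error(w), np.zeros_like(w)
            for i in range(len(w)):                        # perturb one weight at a time
                wp = w.copy()
                wp[i] += pert
                grad[i] = (error(wp) - e0) / pert          # forward-difference gradient
            w -= lr * grad
        print([round(float(forward(w, x)), 2) for x in X]) # trained XOR outputs
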
  • Learning and convergence analysis of neural-type structured networks

    Page(s): 39 - 50

    A class of feedforward neural networks, structured networks, has recently been introduced as a method for solving matrix algebra problems in an inherently parallel formulation. A convergence analysis for the training of structured networks is presented. Since the learning techniques used in structured networks are also employed in the training of neural networks, the issue of convergence is discussed not only from a numerical algebra perspective but also as a means of deriving insight into connectionist learning. Bounds on the learning rate are developed under which exponential convergence of the weights to their correct values is proved for a class of matrix algebra problems that includes linear equation solving, matrix inversion, and Lyapunov equation solving. For a special class of problems, the orthogonalized back-propagation algorithm, an optimal recursive update law for minimizing a least-squares cost functional, is introduced. It guarantees exact convergence in one epoch. Several learning issues are investigated.

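    A minimal sketch of the linear-equation-solving case, assuming the standard step-size bound 2/lambda_max for this quadratic cost; the paper's exact bound and network formulation may differ:

        import numpy as np

        rng = np.random.default_rng(0)
        A = rng.standard_normal((4, 4)) + 4 * np.eye(4)    # illustrative system
        b = rng.standard_normal(4)

        lam_max = np.linalg.eigvalsh(A.T @ A).max()
        lr = 1.0 / lam_max                     # safely below the 2 / lambda_max bound

        x = np.zeros(4)                        # the "weights" being trained
        for _ in range(2000):
            x -= lr * A.T @ (A @ x - b)        # gradient of 0.5 * ||A x - b||^2
        print(np.allclose(x, np.linalg.solve(A, b)))   # should print True
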
  • Comparison of four neural net learning methods for dynamic system identification

    Page(s): 122 - 130

    Four types of neural net learning rules are discussed for dynamic system identification. It is shown that the feedforward network (FFN) pattern learning rule is a first-order approximation of the FFN-batch learning rule. As a result, pattern learning is valid for nonlinear activation networks provided the learning rate is small. For recurrent types of networks (RecNs), RecN-pattern learning is different from RecN-batch learning. However, the difference can be controlled by using small learning rates. While RecN-batch learning is strict in a mathematical sense, RecN-pattern learning is simpler and can be carried out in real time. Simulation results agree very well with the theorems derived. It is shown by simulation that for system identification problems, recurrent networks are less sensitive to noise.

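    A toy illustration of the batch-versus-pattern comparison, assuming a plain linear least-squares model rather than the networks treated in the paper:

        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.standard_normal((20, 3))
        y = X @ np.array([1.0, -2.0, 0.5])

        def train(mode, lr, epochs=5):
            w = np.zeros(3)
            for _ in range(epochs):
                if mode == "batch":
                    w -= lr * X.T @ (X @ w - y)        # summed gradient, one step per epoch
                else:
                    for x_i, y_i in zip(X, y):         # one step per pattern, same rate
                        w -= lr * (x_i @ w - y_i) * x_i
            return w

        for lr in (0.02, 0.002):                       # the gap shrinks with the rate
            gap = np.linalg.norm(train("batch", lr) - train("pattern", lr))
            print(f"lr={lr}: ||w_batch - w_pattern|| = {gap:.2e}")
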
  • Predicting the number of contacts and dimensions of full-custom integrated circuit blocks using neural network techniques

    Page(s): 146 - 153

    Block layout dimension prediction is an important activity in many very large scale integration computer-aided design tasks, among them structural synthesis, floor planning, and physical synthesis. Block layout dimension prediction is harder than block area prediction and has previously been considered intractable. The authors present a solution to this problem using a neural network machine learning approach. The method uses a neural network to predict first the number of contacts; then another neural network uses this prediction and other circuit features to predict the width and the height of its layout. The approach has produced much better results than those previously published: a dimension (aspect ratio) prediction average error of less than 18%, with a corresponding area prediction average error of less than 15%. Furthermore, the technique predicts the number of contacts in a circuit with less than 4% error on average.

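    A hypothetical sketch of the two-stage prediction pipeline, with synthetic data and plain least-squares models standing in for the paper's circuit features and neural networks:

        import numpy as np

        rng = np.random.default_rng(0)
        feats = rng.uniform(1, 10, (100, 3))                 # e.g. cell, net, pin counts
        contacts = feats @ np.array([3.0, 5.0, 2.0]) + rng.normal(0, 1, 100)
        dims = np.column_stack([2.0 * contacts + feats[:, 0],    # synthetic width
                                1.5 * contacts + feats[:, 1]])   # synthetic height

        def fit(X, y):                                       # least squares with bias term
            return np.linalg.lstsq(np.column_stack([X, np.ones(len(X))]), y, rcond=None)[0]

        def predict(X, w):
            return np.column_stack([X, np.ones(len(X))]) @ w

        w_contacts = fit(feats, contacts)                    # stage 1: number of contacts
        stage2_in = np.column_stack([feats, predict(feats, w_contacts)])
        w_dims = fit(stage2_in, dims)                        # stage 2: width and height
        print(predict(stage2_in, w_dims)[:2])                # first two (width, height) rows
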
  • A neural network regulator for turbogenerators

    Page(s): 95 - 100

    A neural network (NN) based regulator for nonlinear, multivariable turbogenerator control is presented. A hierarchical NN architecture is proposed for the regulator design, consisting of two subnetworks which are used for input-output (I-O) mapping and control, respectively, based on the back-propagation (BP) algorithm. The regulator has the flexibility to accept more sensory information to cater to multi-input, multioutput systems. Its operation does not require a reference model or an inverse system model, and it can produce more acceptable control signals than are obtained by using the sign of plant errors during training. I-O mapping of turbogenerator systems using NNs has been investigated, and the regulator has been implemented on a complex turbogenerator system model. Simulation results show satisfactory control performance and illustrate the potential of the NN regulator in comparison with an existing adaptive controller.

  • Back-propagation learning in expert networks

    Page(s): 62 - 72

    Expert networks are event-driven, acyclic networks of neural objects derived from expert systems. The neural objects process information through a nonlinear combining function that is different from, and more complex than, typical neural network node processors. The authors develop back-propagation learning for acyclic, event-driven networks in general and derive a specific algorithm for learning in EMYCIN-derived expert networks. The algorithm combines back-propagation learning with other features of expert networks, including calculation of gradients of the nonlinear combining functions and the hypercube nature of the knowledge space. It offers automation of the knowledge acquisition task for certainty factors, often the most difficult part of knowledge extraction. Results of testing the learning algorithm with a medium-scale (97-node) expert network are presented.

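    A hedged sketch of the kind of combining-function gradient such learning requires, assuming the EMYCIN parallel combination of two non-negative certainty factors (the full EMYCIN function has separate branches for negative and mixed evidence):

        import numpy as np

        def combine(cf1, cf2):
            """EMYCIN parallel combination for two non-negative certainty factors."""
            return cf1 + cf2 * (1.0 - cf1)

        def combine_grad(cf1, cf2):
            """Partials used when back-propagating an error through this node."""
            return 1.0 - cf2, 1.0 - cf1

        # finite-difference check of the hand-derived partial derivatives
        cf1, cf2, eps = 0.3, 0.6, 1e-6
        num = ((combine(cf1 + eps, cf2) - combine(cf1, cf2)) / eps,
               (combine(cf1, cf2 + eps) - combine(cf1, cf2)) / eps)
        print(combine_grad(cf1, cf2), np.round(num, 6))
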
  • Mean field annealing: a formalism for constructing GNC-like algorithms

    Page(s): 131 - 138

    Optimization problems are approached using mean field annealing (MFA), a deterministic approximation to simulated annealing based on mean field theory and the Peierls inequality. The MFA mathematics are applied to three different objective-function examples. In each case, MFA produces a minimization algorithm that is a type of graduated nonconvexity (GNC). When applied to the `weak-membrane' objective, MFA results in an algorithm qualitatively identical to the published GNC algorithm. One of the examples, MFA applied to a piecewise-constant objective function, is then compared experimentally with the corresponding GNC weak-membrane algorithm. The mathematics of MFA are shown to provide a powerful and general tool for deriving optimization algorithms.

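    A generic mean-field-annealing sketch on an assumed toy quadratic (Ising-style) objective; the paper applies the same machinery to the weak-membrane and related objectives:

        import numpy as np

        def mfa(W, b, T0=10.0, T_min=0.01, cool=0.9, sweeps=50):
            """Minimize E(s) = -0.5 s^T W s - b^T s over s in {-1, +1}^n (toy objective)."""
            v = np.zeros(len(b))                     # mean values of the spins
            T = T0
            while T > T_min:
                for _ in range(sweeps):
                    v = np.tanh((W @ v + b) / T)     # mean field fixed-point update
                T *= cool                            # annealing schedule
            return np.sign(v)                        # harden to a configuration

        # toy couplings (assumed): all pairs want to agree, a small field breaks the tie
        W = np.array([[0., 1., 1.], [1., 0., 1.], [1., 1., 0.]])
        print(mfa(W, b=np.array([0.2, 0.2, 0.2])))   # -> [1. 1. 1.]
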
  • Dynamical analysis of the brain-state-in-a-box (BSB) neural models

    Page(s): 86 - 94

    A stability analysis is performed for the brain-state-in-a-box (BSB) neural models with weight matrices that need not be symmetric. The implementation of associative memories using the analyzed class of neural models is also addressed. In particular, the authors modify the BSB model so that they can better control the extent of the domains of attraction of stored patterns. Generalizations of the results obtained for the BSB models to a class of cellular neural networks are also discussed.

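    A minimal sketch of the BSB dynamics being analyzed, with an illustrative and deliberately asymmetric weight matrix:

        import numpy as np

        def bsb(W, x, alpha=0.3, steps=100):
            for _ in range(steps):
                x = np.clip(x + alpha * W @ x, -1.0, 1.0)   # BSB update inside the box
            return x

        # illustrative weights: strongest direction is the pattern p, plus a small
        # asymmetric term (the analysis above does not require symmetric W)
        p = np.array([1.0, -1.0, 1.0])
        W = np.outer(p, p) + np.array([[0., 0.2, 0.], [0., 0., 0.1], [0.1, 0., 0.]])
        print(bsb(W, x=np.array([0.2, -0.1, 0.05])))        # settles on the vertex p
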
  • Maximally fault tolerant neural networks

    Page(s): 14 - 23

    An application of neural network modeling is described for generating hypotheses about the relationships between response properties of neurons and information processing in the auditory system. The goal is to study response properties that are useful for extracting sound localization information from directionally selective spectral filtering provided by the pinna. For studying sound localization based on spectral cues provided by the pinna, a feedforward neural network model with a guaranteed level of fault tolerance is introduced. Fault tolerance and uniform fault tolerance in a neural network are formally defined and a method is described to ensure that the estimated network exhibits fault tolerance. The problem of estimating weights for such a network is formulated as a large-scale nonlinear optimization problem. Numerical experiments indicate that solutions with uniform fault tolerance exist for the pattern recognition problem considered. Solutions derived by introducing fault tolerance constraints have better generalization properties than solutions obtained via unconstrained back-propagation.

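    An illustrative check of the fault-tolerance notion, assuming a single-hidden-unit stuck-at-zero fault model and a hand-made redundant network; the paper's constrained training procedure is not reproduced:

        import numpy as np

        def predict(X, W1, w2, drop=None):
            h = np.tanh(X @ W1)                      # hidden activations
            if drop is not None:
                h = h.copy()
                h[:, drop] = 0.0                     # fault: hidden unit `drop` stuck at 0
            return np.sign(h @ w2)

        def fault_tolerant(X, t, W1, w2):
            """True if every single-hidden-unit fault leaves all patterns correct."""
            return all(np.array_equal(predict(X, W1, w2, drop=j), t)
                       for j in range(W1.shape[1]))

        # hand-made redundant network: three identical hidden units vote for sign(x)
        X = np.array([[-2.0], [-0.5], [0.5], [2.0]])
        t = np.array([-1.0, -1.0, 1.0, 1.0])
        W1 = np.array([[4.0, 4.0, 4.0]])
        w2 = np.array([1.0, 1.0, 1.0])
        print(fault_tolerant(X, t, W1, w2))          # True: any single unit may fail
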
  • Using additive noise in back-propagation training

    Page(s): 24 - 38

    The possibility of improving the generalization capability of a neural network by introducing additive noise to the training samples is discussed. The network considered is a feedforward layered neural network trained with the back-propagation algorithm. Back-propagation training is viewed as nonlinear least-squares regression and the additive noise is interpreted as generating a kernel estimate of the probability density that describes the training vector distribution. Two specific application types are considered: pattern classifier networks and estimation of a nonstochastic mapping from data corrupted by measurement errors. It is not proved that the introduction of additive noise to the training vectors always improves network generalization. However, the analysis suggests mathematically justified rules for choosing the characteristics of noise if additive noise is used in training. Results of mathematical statistics are used to establish various asymptotic consistency results for the proposed method. Numerical simulations support the applicability of the training method.

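    A minimal sketch of the training-step modification, assuming a tiny regression network and an illustrative noise level; the noise standard deviation plays the role of a kernel bandwidth:

        import numpy as np

        rng = np.random.default_rng(0)
        X = np.linspace(-1, 1, 20).reshape(-1, 1)
        y = np.sin(3 * X[:, 0])                      # nonstochastic target mapping

        W1 = 0.5 * rng.standard_normal((1, 8))       # 1-8-1 net with tanh hidden units, no biases
        w2 = 0.5 * rng.standard_normal(8)
        lr, noise_std = 0.05, 0.1                    # noise_std acts like a kernel bandwidth

        for _ in range(3000):
            Xn = X + noise_std * rng.standard_normal(X.shape)   # fresh additive input noise
            h = np.tanh(Xn @ W1)
            err = h @ w2 - y
            w2 -= lr * h.T @ err / len(X)                        # back-propagation updates
            W1 -= lr * Xn.T @ (err[:, None] * w2 * (1 - h ** 2)) / len(X)

        print(float(np.mean((np.tanh(X @ W1) @ w2 - y) ** 2)))   # clean-data training MSE
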
  • The digi-neocognitron: a digital neocognitron neural network model for VLSI

    Page(s): 73 - 85

    One of the most complicated ANN models, the neocognitron (NC), is adapted to an efficient all-digital implementation for VLSI. The new model, the digi-neocognitron (DNC), has the same pattern recognition performance as the NC. The DNC model is derived from the NC model by a combination of preprocessing approximation and the definition of new model functions, e.g., multiplication and division are eliminated by conversion of factors to powers of 2, requiring only shift operations. The NC model is reviewed, the DNC model is presented, a methodology to convert NC models to DNC models is discussed, and the performances of the two models are compared on a character recognition example. The DNC model has substantial advantages over the NC model for VLSI implementation. The area-delay product is improved by two to three orders of magnitude, and I/O and memory requirements are reduced by representation of weights with 3 bits or less and neuron outputs with 4 bits or 7 bits.

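    An illustration of one ingredient, the conversion of factors to powers of 2 so that multiplications become shifts; the rounding rule and toy values are assumptions, and the redefined model functions are not reproduced:

        import math

        def to_power_of_two(w):
            """Nearest power-of-two exponent and sign for a nonzero weight."""
            return round(math.log2(abs(w))), 1 if w > 0 else -1

        def shift_multiply(activation, exponent, sign):
            """activation * sign * 2**exponent using only shifts (integer activation)."""
            return sign * (activation << exponent if exponent >= 0 else activation >> -exponent)

        for w in (0.26, 3.7, -1.2):                  # illustrative weight values
            e, s = to_power_of_two(w)
            print(f"w={w:5.2f} ~ {s * 2.0 ** e:5.2f},  13 * w ~ {shift_multiply(13, e, s)}")
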
  • Learning convergence in the cerebellar model articulation controller

    Page(s): 115 - 121

    A new way to look at the learning algorithm in the cerebellar model articulation controller (CMAC) proposed by J.S. Albus (1975) is presented. A proof that the CMAC learning always converges with arbitrary accuracy on any set of training data is obtained. An alternative way to implement CMAC based on the insights obtained in the process is proposed. The scheme is tested with a computer simulation for learning the inverse dynamics of a two-link robot arm.

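    A minimal one-dimensional CMAC sketch with illustrative tiling counts, an assumed offset scheme, and a toy target function; the convergence result concerns this style of LMS update over the active cells:

        import numpy as np

        n_tilings, n_tiles = 8, 12                   # illustrative sizes
        weights = np.zeros((n_tilings, n_tiles))

        def active_cells(x):                         # x assumed in [0, 1)
            """One active cell per tiling; tilings are mutually offset."""
            return [(t, int(x * (n_tiles - 1) + t / n_tilings)) for t in range(n_tilings)]

        def cmac(x):
            return sum(weights[t, i] for t, i in active_cells(x))

        def train(x, target, beta=0.3):
            err = target - cmac(x)
            for t, i in active_cells(x):             # spread the correction over active cells
                weights[t, i] += beta * err / n_tilings

        rng = np.random.default_rng(0)
        for _ in range(5000):
            x = rng.uniform(0, 0.99)
            train(x, np.sin(2 * np.pi * x))          # toy target function
        print([round(float(cmac(x)), 2) for x in (0.1, 0.25, 0.5, 0.75)])   # roughly sin(2*pi*x)
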

Aims & Scope

IEEE Transactions on Neural Networks is devoted to the science and technology of neural networks, publishing papers that disclose significant technical knowledge, exploratory developments, and applications of neural networks, from biology to software to hardware.

 

This Transactions ceased production in 2011. The current retitled publication is IEEE Transactions on Neural Networks and Learning Systems.

Full Aims & Scope