
IEEE Transactions on Neural Networks

Issue 6 • November 1993


Displaying Results 1 - 13 of 13
  • An improved algorithm for neural network classification of imbalanced training sets

    Publication Year: 1993 , Page(s): 962 - 969
    Cited by:  Papers (27)  |  Patents (13)

    The backpropagation algorithm converges very slowly for two-class problems in which most of the exemplars belong to one dominant class. An analysis shows that this occurs because the computed net error gradient vector is so dominated by the larger class that the net error for the exemplars of the smaller class increases significantly in the initial iterations; the subsequent rate of convergence of the net error is very low. A modified technique for calculating a direction in weight space that decreases the error for each class is presented. Using this algorithm, the rate of learning for two-class classification problems is accelerated by an order of magnitude.
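
    The abstract does not give the exact weight-update rule, but the idea of a descent direction that reduces each class's error separately can be sketched as follows (a minimal NumPy illustration; the function names and the normalize-and-sum combination are assumptions, not the paper's algorithm):

    import numpy as np

    def class_balanced_direction(grad_fn, weights, X, y):
        # grad_fn(weights, X_c, y_c) returns the error gradient computed
        # over one class's exemplars only.  Normalising each class gradient
        # before summing keeps the dominant class from swamping the step,
        # so the chosen direction decreases the error for both classes.
        direction = np.zeros_like(weights)
        for c in np.unique(y):
            g = grad_fn(weights, X[y == c], y[y == c])
            norm = np.linalg.norm(g)
            if norm > 0:
                direction += g / norm
        return direction

    # usage: weights -= learning_rate * class_balanced_direction(grad_fn, weights, X, y)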

  • Approximations of continuous functionals by neural networks with application to dynamic systems

    Publication Year: 1993 , Page(s): 910 - 918
    Cited by:  Papers (54)  |  Patents (2)

    The paper gives several strong results on neural network representation in an explicit form. Under very mild conditions, a functional defined on a compact set in C[a, b] or L^p[a, b], spaces of infinite dimension, can be approximated arbitrarily well by a neural network with one hidden layer. The results are a significant development beyond earlier work, where theorems on approximating continuous functions defined on a finite-dimensional real space by neural networks with one hidden layer were given. All the results are shown to be applicable to the approximation of the output of dynamic systems at any particular time.

  • A combinatorial approach to understanding perceptron capabilities

    Publication Year: 1993 , Page(s): 989 - 992
    Cited by:  Papers (4)

    This work investigates, from a theoretical viewpoint, the classification capabilities of perceptrons that incorporate a single hidden layer of nodes. In particular, the question of determining whether a given set can be realized as the decision region of such a network is considered. The main theoretical result demonstrates that the realizability of a set can be determined by restricting attention to any neighborhood of its boundary. This result is then used to identify general classes of realizable sets, and an example is given which shows that even though the realizability of a set might be readily discerned, the construction of an appropriate perceptron architecture may be complicated.

  • Performance analysis of a pipelined backpropagation parallel algorithm

    Publication Year: 1993 , Page(s): 970 - 981
    Cited by:  Papers (8)  |  Patents (2)

    The supervised training of feedforward neural networks is often based on the error backpropagation algorithm. The authors consider the successive layers of a feedforward neural network as the stages of a pipeline, which is used to improve the efficiency of the parallel algorithm. A simple placement rule is used to take advantage of simultaneous execution of the calculations on each layer of the network. The analytic expressions show that the parallelization is efficient; moreover, they indicate that the performance of this implementation is almost independent of the neural network architecture. Their simplicity allows easy prediction of learning performance on a parallel machine for any neural network architecture. The experimental results are in agreement with the analytical estimates.

  • Continuous speech recognition by connectionist statistical methods

    Publication Year: 1993 , Page(s): 893 - 909
    Cited by:  Papers (19)  |  Patents (2)

    Over the period 1987-1991, a series of theoretical and experimental results has suggested that multilayer perceptrons (MLPs) are an effective family of algorithms for the smooth estimation of the high-dimensional probability density functions that are useful in continuous speech recognition. The early form of this work focused on hidden Markov models (HMMs) that are independent of phonetic context. More recently, the theory has been extended to context-dependent models. The authors review the basic principles of their hybrid HMM/MLP approach and describe a series of improvements that are analogous to the system modifications instituted for the leading conventional HMM systems over the past few years. Some of these methods directly trade off computational complexity for reduced requirements of memory and memory bandwidth. Results are presented on the widely used Resource Management speech database, which has been distributed by the US National Institute of Standards and Technology.
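
    In hybrid HMM/MLP systems of this kind, the MLP is typically trained to output per-frame phone posteriors P(q|x), which are converted into scaled likelihoods P(q|x)/P(q) before being used as HMM emission scores. A minimal sketch of that conversion (generic to the hybrid approach, not a description of the authors' full system):

    import numpy as np

    def scaled_log_likelihoods(posteriors, priors, eps=1e-12):
        # posteriors: (T, Q) per-frame MLP outputs P(q | x_t)
        # priors:     (Q,)   phone priors estimated from the training labels
        # Bayes' rule gives P(x|q) = P(q|x) P(x) / P(q); dropping the
        # class-independent P(x) yields the scaled likelihood P(q|x)/P(q),
        # used in place of the usual HMM emission probability.
        return np.log(posteriors + eps) - np.log(priors + eps)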

  • Identification and decentralized adaptive control using dynamical neural networks with application to robotic manipulators

    Publication Year: 1993 , Page(s): 919 - 930
    Cited by:  Papers (44)  |  Patents (2)

    Efficient implementation of a neural-network-based strategy for the online adaptive control of complex dynamical systems characterized by an interconnection of several (possibly nonlinear) subsystems hinges on the rapid convergence of the training scheme used to learn the system dynamics. For example, to achieve satisfactory control of a multijointed robotic manipulator during high-speed trajectory tracking tasks, the highly nonlinear and coupled dynamics, together with parameter variations, necessitate fast updating of the control actions. To meet this requirement, a multilayer neural network structure that includes dynamical nodes in the hidden layer is proposed, and a supervised learning scheme that employs a simple distributed updating rule is used for online identification and decentralized adaptive control. Important characteristic features of the resulting control scheme are discussed, and a quantitative evaluation of its performance on the above illustrative example is given.

  • A perceptron network for functional identification and control of nonlinear systems

    Publication Year: 1993 , Page(s): 982 - 988
    Cited by:  Papers (100)  |  Patents (3)

    Tracking control of a general class of nonlinear systems using a perceptron neural network (PNN) is presented. The basic structure of the PNN and its training law are first derived. A novel discrete-time control strategy is introduced that employs the PNN for direct online estimation of the required feedforward control input. The developed controller can be applied to both discrete- and continuous-time plants. Unlike most of the existing direct adaptive or learning schemes, the nonlinear plant is not assumed to be feedback linearizable. The stability of the neural controller under ideal conditions and its robust stability to inexact modeling information are rigorously analyzed.

  • Application of Hopfield network to saccades

    Publication Year: 1993 , Page(s): 995 - 997
    Cited by:  Papers (3)

    Human eye movement mechanisms (saccades) are very useful for scene analysis, including object representation and pattern recognition. A Hopfield neural network for emulating saccades is proposed. The network uses an energy function that includes location and identification tasks. Computer simulation shows that the network performs those tasks cooperatively. The result suggests that the network is applicable to shift-invariant pattern recognition.
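
    The abstract does not spell out the energy function; the saccade model adds location and identification terms to a Hopfield energy of the usual form E(s) = -1/2 s'Ws + theta.s. A generic sketch of the asynchronous update that descends such an energy (illustrative only, not the paper's network):

    import numpy as np

    def hopfield_sweep(s, W, theta):
        # One asynchronous update sweep over a bipolar state vector s in {-1, +1}^N.
        # W is symmetric with zero diagonal; each update cannot increase
        # E(s) = -0.5 * s @ W @ s + theta @ s, so the network settles into a
        # local energy minimum.
        for i in np.random.permutation(len(s)):
            s[i] = 1 if W[i] @ s - theta[i] >= 0 else -1
        return s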

  • Empirical results of using back-propagation neural networks to separate single echoes from multiple echoes

    Publication Year: 1993 , Page(s): 993 - 995

    Empirical results illustrate the pitfalls of applying an artificial neural network (ANN) to the classification of underwater active sonar returns. During training, a back-propagation ANN classifier learns to recognize two classes of reflected active sonar waveforms: waveforms having two major sonar echoes or peaks and those having one major echo or peak. It is shown how the classifier learns to distinguish between the two classes. Testing the ANN classifier with different waveforms of each type generated unexpected results: the number of echo peaks was not the feature used to separate the classes.

  • Silicon models of lateral inhibition

    Publication Year: 1993 , Page(s): 955 - 961
    Cited by:  Papers (3)  |  Patents (1)

    The neurological process known as lateral inhibition (LI) has long been acknowledged as a critical operation in the preprocessing of many types of sensory stimuli. In the mammalian retina, LI is used to enhance visual images by performing differential amplification on the pixels of which the image is composed. In this study, LI is implemented using VLSI-based models. These models consist of small two-dimensional arrays of generalized sensory pixels, each of which inhibits, and in turn is inhibited by, each of its immediate neighbors. Two custom CMOS array prototype circuits have been designed, fabricated, and characterized. Test results indicate that both circuits are able to impart contrast to arbitrary two-dimensional geometric images in a flexible yet stable manner, and do so immediately and simultaneously. These arrays thus offer a level of performance not attainable by software methods, making this approach well suited for machine vision systems that utilize parallel architectures.
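
    As a software analogue of what the chips compute (illustrative only; the circuits operate in continuous time and in parallel), a recurrent lateral-inhibition array can be iterated to a fixed point, with each pixel inhibited by its four immediate neighbours:

    import numpy as np

    def lateral_inhibition(image, alpha=0.2, iters=20):
        # Each pixel's response is its input minus alpha times the summed
        # responses of its four immediate neighbours (periodic boundary via
        # np.roll, for brevity).  For alpha < 0.25 the iteration converges,
        # and the output shows the contrast enhancement described above.
        out = image.astype(float).copy()
        for _ in range(iters):
            neighbours = (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
                          np.roll(out, 1, 1) + np.roll(out, -1, 1))
            out = image - alpha * neighbours
        return out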

  • Neural networks for shortest path computation and routing in computer networks

    Publication Year: 1993 , Page(s): 941 - 954
    Cited by:  Papers (61)  |  Patents (1)

    The application of neural networks to the optimum routing problem in packet-switched computer networks, where the goal is to minimize the network-wide average time delay, is addressed. Under appropriate assumptions, the optimum routing algorithm relies heavily on shortest path computations that have to be carried out in real time. For this purpose, an efficient neural network shortest path algorithm that is an improved version of previously suggested Hopfield models is proposed. The general principles involved in the design of the proposed neural network are discussed in detail. Its computational power is demonstrated through computer simulations. One of the main features of the proposed model is that it will enable the routing algorithm to be implemented in real time and also to be adaptive to changes in link costs and network topology.
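
    The paper's exact energy function is not reproduced in the abstract. A typical penalty-style Hopfield energy for shortest-path routing over arc variables x[i, j] combines the path cost with quadratic penalties for flow-conservation violations and for non-binary values; the sketch below evaluates such an energy (the coefficients and the specific form are assumptions, not the authors' formulation):

    import numpy as np

    def shortest_path_energy(x, cost, src, dst, mu=(1.0, 50.0, 50.0)):
        # x:    (N, N) matrix, x[i, j] ~ 1 if arc i -> j lies on the path
        # cost: (N, N) link costs (use a large value for missing arcs)
        # Term 1: total cost of the selected arcs.
        # Term 2: flow conservation -- net outflow must be +1 at the source,
        #         -1 at the destination, and 0 elsewhere.
        # Term 3: pushes each x[i, j] toward 0 or 1.
        mu1, mu2, mu3 = mu
        b = np.zeros(len(x))
        b[src], b[dst] = 1.0, -1.0
        path_cost = np.sum(cost * x)
        flow_violation = np.sum((x.sum(axis=1) - x.sum(axis=0) - b) ** 2)
        binary_violation = np.sum(x * (1.0 - x))
        return mu1 * path_cost + mu2 * flow_violation + mu3 * binary_violation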

  • Training a network with ternary weights using the CHIR algorithm

    Publication Year: 1993 , Page(s): 997 - 1000

    A modification of the binary weight CHIR algorithm is presented, whereby a zero state is added to the possible binary weight states. This method allows solutions with reduced connectivity to be obtained, by offering disconnections in addition to the excitatory and inhibitory connections. The algorithm has been examined via extensive computer simulations for the restricted cases of parity, symmetry, and teacher problems, which show convergence rates similar to those presented for the binary CHIR2 algorithm, but with reduced connectivity. Moreover, this method expands the set of problems solvable via the binary weight network configuration with no additional parameter requirements.

  • On solving constrained optimization problems with neural networks: a penalty method approach

    Publication Year: 1993 , Page(s): 931 - 940
    Cited by:  Papers (25)

    This paper deals with the use of neural networks to solve linear and nonlinear programming problems. The dynamics of these networks are analyzed; in particular, the dynamics of the canonical nonlinear programming circuit are examined. The circuit is shown to be a gradient system that seeks to minimize an unconstrained energy function that can be viewed as a penalty method approximation of the original problem. Next, the implementations that correspond to the dynamical canonical nonlinear programming circuit are examined. It is shown that the energy function the implemented system seeks to minimize differs from that of the canonical circuit, owing to the saturation limits of the op-amps in the circuit. It is also noted that this difference can cause the circuit to converge to a different state than the dynamical canonical circuit. To remedy this problem, a new circuit implementation is proposed.
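
    In software terms, the gradient system described above corresponds to gradient descent on a penalty energy of the form E(x) = f(x) + c * sum_i max(0, g_i(x))^2 for the problem min f(x) subject to g_i(x) <= 0. A minimal Euler-discretised sketch (the names and the specific quadratic penalty are illustrative assumptions; the paper analyzes the analog circuit, not this code):

    import numpy as np

    def penalty_descent(f_grad, gs, g_grads, x0, c=100.0, lr=1e-3, steps=5000):
        # gs[i](x) and g_grads[i](x) return the i-th constraint value and its
        # gradient.  Only violated constraints (g_i(x) > 0) contribute to the
        # energy gradient, which is what makes E an exterior penalty
        # approximation of the constrained problem.
        x = np.asarray(x0, dtype=float)
        for _ in range(steps):
            grad = f_grad(x)
            for g, dg in zip(gs, g_grads):
                v = g(x)
                if v > 0:
                    grad = grad + 2.0 * c * v * dg(x)
            x = x - lr * grad
        return x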


Aims & Scope

IEEE Transactions on Neural Networks is devoted to the science and technology of neural networks; it publishes papers that disclose significant technical knowledge, exploratory developments, and applications of neural networks, from biology to software to hardware.

 

This Transactions ceased production in 2011. The current retitled publication is IEEE Transactions on Neural Networks and Learning Systems.
