
IEEE Transactions on Neural Networks

Issue 5 • Date Sept. 1994

  • Comments on "Can backpropagation error surface not have local minima?"

    Publication Year: 1994, Page(s):844 - 845
    Cited by:  Papers (4)
    PDF (224 KB)

    In the above paper, Yu (IEEE Trans. Neural Networks, vol. 3, no. 6, pp. 1019-1021, 1992) claims to prove that local minima do not exist in the error surface of backpropagation networks trained on data with t distinct input patterns when the network is capable of exactly representing arbitrary mappings on t input patterns. The commenter points out that the proof presented is flawed, so that the res...

  • Synthetic approach to optimal filtering

    Publication Year: 1994, Page(s):803 - 811
    Cited by:  Papers (43)  |  Patents (1)
    PDF (636 KB)

    As opposed to the analytic approach used in the modern theory of optimal filtering, a synthetic approach is presented. The signal/sensor data, which are generated by either computer simulation or actual experiments, are synthesized into a filter by training a recurrent multilayer perceptron (RMLP) with at least one hidden layer of fully or partially interconnected neurons and with or without outpu...

  • Developing higher-order networks with empirically selected units

    Publication Year: 1994, Page(s):698 - 711
    Cited by:  Papers (11)
    PDF (1392 KB)

    Introduces a class of simple polynomial neural network classifiers, called mask perceptrons. A series of algorithms for practical development of such structures is outlined. It relies on ordering of input attributes with respect to their potential usefulness and heuristic-driven generation and selection of hidden units (monomial terms) in order to combat the exponential explosion in the number of ...

  • Synthesis of Brain-State-in-a-Box (BSB) based associative memories

    Publication Year: 1994, Page(s):730 - 737
    Cited by:  Papers (37)
    PDF (640 KB)

    Presents a novel synthesis procedure to realize an associative memory using the Generalized-Brain-State-in-a-Box (GBSB) neural model. The implementation yields an interconnection structure that guarantees that the desired memory patterns are stored as asymptotically stable equilibrium points and that possesses very few spurious states. Furthermore, the interconnection structure is in general non-s...

  • Enhanced MLP performance and fault tolerance resulting from synaptic weight noise during training

    Publication Year: 1994, Page(s):792 - 802
    Cited by:  Papers (78)
    PDF (1028 KB)

    We analyze the effects of analog noise on the synaptic arithmetic during multilayer perceptron training, by expanding the cost function to include noise-mediated terms. Predictions are made in the light of these calculations that suggest that fault tolerance, training quality and training trajectory should be improved by such noise injection. Extensive simulation experiments on two distinct classi...

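    The weight-noise idea summarized above can be sketched in a few lines: perturb each synaptic weight with zero-mean Gaussian noise for the forward pass, then apply the resulting gradient to the clean weights. This is an illustrative sketch for a single linear unit, not the paper's analysis; the function name, learning rate, and noise level are assumptions.

    ```python
    import random

    def train_step_with_weight_noise(w, x, t, lr=0.1, sigma=0.05):
        """One gradient step for a single linear unit, evaluated with noisy
        weights. Zero-mean Gaussian noise is injected into each synaptic
        weight for the forward pass (illustrating the kind of noise the
        paper analyzes); the update is applied to the clean weights."""
        w_noisy = [wi + random.gauss(0.0, sigma) for wi in w]  # perturbed synapses
        y = sum(wi * xi for wi, xi in zip(w_noisy, x))         # noisy forward pass
        err = y - t                                            # output error
        return [wi - lr * err * xi for wi, xi in zip(w, x)]    # update clean weights
    ```

    With sigma = 0 this reduces to an ordinary gradient step; with sigma > 0 the noise acts like the regularizing terms the paper derives from the expanded cost function.
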
  • Pruning recurrent neural networks for improved generalization performance

    Publication Year: 1994, Page(s):848 - 851
    Cited by:  Papers (57)  |  Patents (1)
    PDF (364 KB)

    Determining the architecture of a neural network is an important issue for any learning task. For recurrent neural networks no general methods exist that permit the estimation of the number of layers of hidden neurons, the size of layers or the number of weights. We present a simple pruning heuristic that significantly improves the generalization performance of trained recurrent networks. We illus...

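    A minimal magnitude-based variant of weight pruning can be written directly. Note the magnitude criterion here is assumed for illustration only; the paper's actual heuristic, and the retraining that normally follows pruning, are not shown.

    ```python
    def prune_smallest(weights, fraction=0.1):
        """Zero out the given fraction of weights with the smallest magnitude.
        A magnitude criterion is assumed here for illustration; after pruning,
        the network would normally be retrained on the task."""
        flat = sorted(abs(w) for row in weights for w in row)
        k = int(len(flat) * fraction)                  # number of weights to drop
        threshold = flat[k] if k < len(flat) else float("inf")
        return [[0.0 if abs(w) < threshold else w for w in row] for row in weights]
    ```
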
  • Multilayer associative neural networks (MANN's): storage capacity versus perfect recall

    Publication Year: 1994, Page(s):812 - 822
    Cited by:  Papers (14)
    PDF (804 KB)

    The objective of this paper is to resolve important issues in artificial neural nets: exact recall and capacity in multilayer associative memories. These problems have imposed restrictions on coding strategies. We propose the following triple-layered hybrid neural network: the first synapse is a one-shot associative memory using the modified Kohonen's adaptive learning algorithm with arbitrary i...

  • Global analysis of Oja's flow for neural networks

    Publication Year: 1994, Page(s):674 - 683
    Cited by:  Papers (48)
    PDF (760 KB)

    A detailed study of Oja's learning equation in neural networks is undertaken in this paper. Not only are such fundamental issues as existence, uniqueness, and representation of solutions completely resolved, but also the convergence issue is resolved. It is shown that the solution of Oja's equation is exponentially convergent to an equilibrium from any initial value. Moreover, the necessary and su...

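    Oja's learning equation itself is compact enough to state as code: with input x and output y = w · x, the weight update is w ← w + η y (x − y w). The sketch below uses an assumed learning rate; the paper's contribution is the global convergence analysis of this flow, not the rule itself.

    ```python
    def oja_step(w, x, lr=0.05):
        """One discrete update of Oja's rule: w += lr * y * (x - y * w),
        with y = w . x. Repeated over input samples, w converges to the
        principal eigenvector of the input covariance with ||w|| -> 1."""
        y = sum(wi * xi for wi, xi in zip(w, x))                      # neuron output
        return [wi + lr * y * (xi - y * wi) for wi, xi in zip(w, x)]  # Oja update
    ```

    Feeding inputs that lie along the first coordinate axis, for example, drives w toward the unit vector along that axis, consistent with the exponential convergence the paper proves.
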
  • Detection and classification of underwater acoustic transients using neural networks

    Publication Year: 1994, Page(s):712 - 718
    Cited by:  Papers (16)  |  Patents (1)
    PDF (688 KB)

    Underwater acoustic transients can develop from a wide variety of sources. Accordingly, detection and classification of such transients by automated means can be exceedingly difficult. This paper describes a new approach to this problem based on adaptive pattern recognition employing neural networks and an alternative metric, the Hausdorff metric. The system uses self-organization to both generali...

  • On the initialization and optimization of multilayer perceptrons

    Publication Year: 1994, Page(s):738 - 751
    Cited by:  Papers (31)
    PDF (1224 KB)

    Multilayer perceptrons are now widely used for pattern recognition, although the training remains a time-consuming procedure that often converges toward a local optimum. Moreover, as the optimum network size and topology are usually unknown, the search for this optimum requires many networks to be trained. In this paper the authors propose a method for properly initializing the parameters (weights)...

  • Hierarchical intelligent control for robotic motion

    Publication Year: 1994, Page(s):823 - 832
    Cited by:  Papers (26)  |  Patents (3)
    PDF (844 KB)

    This paper presents a new scheme for intelligent control of robotic manipulators. This scheme is a hierarchically integrated approach to neuromorphic and symbolic control of robotic manipulators. This includes an applied neural network for servo control and knowledge-based approximation. The neural network in the servo control level is based on a numerical manipulation, while the knowledge based p...

  • Multilayer neural networks for reduced-rank approximation

    Publication Year: 1994, Page(s):684 - 697
    Cited by:  Papers (13)  |  Patents (1)
    PDF (1108 KB)

    This paper is developed in two parts. First, the authors formulate the solution to the general reduced-rank linear approximation problem, relaxing the invertibility assumption of the input autocorrelation matrix used by previous authors. The authors' treatment unifies linear regression, Wiener filtering, full rank approximation, auto-association networks, SVD and principal component analysis (PCA) ...

  • Exponential stability and oscillation of Hopfield graded response neural network

    Publication Year: 1994, Page(s):719 - 729
    Cited by:  Papers (94)
    PDF (952 KB)

    Both exponential and stochastic stabilities of the Hopfield neural network are analyzed. The results are especially useful for analyzing the stabilities of asymmetric neural networks. A constraint on the connection matrix has been found under which the neural network has a unique and exponentially stable equilibrium. Given any connection matrix, this constraint can be satisfied through the adjustm...

  • Weight smoothing to improve network generalization

    Publication Year: 1994, Page(s):752 - 763
    Cited by:  Papers (21)
    PDF (1076 KB)

    A weight smoothing algorithm is proposed in this paper to improve a neural network's generalization capability. The algorithm can be used when the data patterns to be classified are presented on an n-dimensional grid (n ≥ 1) and there exist some correlations among neighboring data points within a pattern. For a fully interconnected feedforward net, no such correlation information is embedded in...

  • Neural network applications for jamming state information generator

    Publication Year: 1994, Page(s):833 - 837
    PDF (484 KB)

    A known jamming state information (JSI) scheme for a coded frequency-hopped M-ary frequency-shift-keying (FH/MFSK) system under partial-band noise jamming, plus additive white Gaussian noise, utilizes the maximum a posteriori (MAP) rule based on the total energy received in the M-tone signaling bands. It is assumed that the knowledge of partial-band noise jamming fraction is available to the JSI g...

  • Maximum likelihood training of probabilistic neural networks

    Publication Year: 1994, Page(s):764 - 783
    Cited by:  Papers (91)  |  Patents (4)
    PDF (1504 KB)

    A maximum likelihood method is presented for training probabilistic neural networks (PNN's) using a Gaussian kernel, or Parzen window. The proposed training algorithm enables general nonlinear discrimination and is a generalization of Fisher's method for linear discrimination. Important features of maximum likelihood training for PNN's are: 1) it economizes the well-known Parzen window estimator w...

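    The PNN decision structure the paper starts from is easy to sketch: estimate each class density with a Gaussian Parzen window over the stored training patterns and pick the class with the largest score. The kernel width below is an assumed fixed constant; the paper's maximum likelihood training of such parameters is not shown.

    ```python
    import math

    def pnn_classify(x, train, sigma=0.5):
        """Probabilistic neural network decision rule: sum a Gaussian kernel
        over each class's training patterns and return the highest-scoring
        class. sigma is the Parzen window width (assumed fixed here)."""
        scores = {}
        for pattern, label in train:
            d2 = sum((xi - pi) ** 2 for xi, pi in zip(x, pattern))   # squared distance
            scores[label] = scores.get(label, 0.0) + math.exp(-d2 / (2 * sigma**2))
        return max(scores, key=scores.get)                           # MAP-style pick
    ```
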
  • A new winners-take-all architecture in artificial neural networks

    Publication Year: 1994, Page(s):838 - 843
    Cited by:  Papers (18)
    PDF (480 KB)

    MAXNET is a common competitive architecture to select the maximum or minimum from a set of data. However, there are two major problems with the MAXNET. The first problem is its slow convergence rate if all the data have nearly the same value. The second one is that it fails when either nonunique extreme values exist or each initial value is smaller than or equal to the sum of initial inhibitions f...

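    The baseline MAXNET the paper critiques works by mutual inhibition: every unit subtracts a fraction of the others' activations until only one stays positive. Below is a sketch with an assumed inhibition strength; it inherits exactly the failure modes noted above (tied maxima, or inhibition overwhelming every unit).

    ```python
    def maxnet(values, eps=None, max_iter=1000):
        """Iterate MAXNET mutual inhibition and return the index of the
        winning unit. eps is the inhibition strength (kept below 1/(n-1));
        ties or over-inhibition can leave no clear winner, which is the
        failure mode the paper's new architecture addresses."""
        a = list(values)
        n = len(a)
        if eps is None:
            eps = 1.0 / n                                  # safe default, < 1/(n-1)
        for _ in range(max_iter):
            total = sum(a)
            a = [max(0.0, ai - eps * (total - ai)) for ai in a]
            if sum(1 for ai in a if ai > 0.0) <= 1:        # a single survivor left
                break
        return a.index(max(a))
    ```
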
  • A neural network learning algorithm tailored for VLSI implementation

    Publication Year: 1994, Page(s):784 - 791
    Cited by:  Papers (35)  |  Patents (13)
    PDF (748 KB)

    This paper describes concepts that optimize an on-chip learning algorithm for implementation of VLSI neural networks with conventional technologies. The network considered comprises an analog feedforward network with digital weights and update circuitry, although many of the concepts are also valid for analog weights. A general, semi-parallel form of perturbation learning is used to accelerate hid...

  • The best approximation to C² functions and its error bounds using regular-center Gaussian networks

    Publication Year: 1994, Page(s):845 - 847
    Cited by:  Papers (3)
    PDF (252 KB)

    Gaussian neural networks are considered to approximate any C² function with support on the unit hypercube Iᵐ = [0, 1]ᵐ in the sense of best approximation. An upper bound O(N⁻²) on the approximation error is obtained in the present paper for a Gaussian network having Nᵐ hidden neurons with centers defined on a regular mesh in Iᵐ.


Aims & Scope

IEEE Transactions on Neural Networks is devoted to the science and technology of neural networks, publishing significant technical knowledge, exploratory developments, and applications of neural networks from biology to software to hardware.

This Transactions ceased production in 2011. The current retitled publication is IEEE Transactions on Neural Networks and Learning Systems.
