IEEE Transactions on Neural Networks

Volume 6 Issue 3 • May 1995

Displaying Results 1 - 25 of 31
  • Comments about "Analysis of the convergence properties of topology preserving neural networks"

    Publication Year: 1995, Page(s):797 - 799
    Cited by:  Papers (1)
    PDF (259 KB)

    Shows that the main proofs of the above paper (Yu et al., IEEE Trans. Neural Networks, vol. 4, no. 2, pp. 207-220, 1993) are incomplete and not correct: in fact, self-organization cannot be achieved if the adaptation parameter satisfies the classical Robbins-Monro conditions, and Proposition 2 is erroneous. Moreover, the two-dimensional extension (Theorem 3) is not proved. The main point is th...

  • On-line learning with minimal degradation in feedforward networks

    Publication Year: 1995, Page(s):657 - 668
    Cited by:  Papers (18)
    PDF (1096 KB)

    Dealing with nonstationary processes requires quick adaptation while at the same time avoiding catastrophic forgetting. A neural learning technique that satisfies these requirements, without sacrificing the benefits of distributed representations, is presented. It relies on a formalization of the problem as the minimization of the error over the previously learned input-output patterns, subject to...

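    The entry above formalizes online learning as minimizing the error over previously learned input-output patterns, but the constraint itself is cut off in the snippet. The sketch below only illustrates that general idea for a single linear layer, with an assumed drift penalty standing in for the paper's actual constraint; all names and parameter values are for illustration.

        import numpy as np

        def update_with_minimal_degradation(W, x_new, y_new, X_old, Y_old,
                                            lr=0.05, drift_penalty=10.0, steps=200):
            """Fit (x_new, y_new) while penalizing error growth on previously
            learned patterns (assumed penalty form, not the paper's constraint)."""
            for _ in range(steps):
                e_new = W @ x_new - y_new                  # error on the new pattern
                grad = np.outer(e_new, x_new)
                e_old = W @ X_old.T - Y_old.T              # errors on stored patterns
                grad += drift_penalty / len(X_old) * e_old @ X_old
                W = W - lr * grad
            return W

        W = np.zeros((1, 3))
        X_old, Y_old = np.eye(3), np.array([[1.0], [2.0], [3.0]])
        W = update_with_minimal_degradation(W, np.ones(3), np.array([0.5]), X_old, Y_old)
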
  • The high-order Boltzmann machine: learned distribution and topology

    Publication Year: 1995, Page(s):767 - 770
    Cited by:  Papers (10)
    PDF (340 KB)

    In this paper we give a formal definition of the high-order Boltzmann machine (BM) and extend the well-known results on the convergence of the learning algorithm of the second-order BM. From the Bahadur-Lazarsfeld expansion we characterize the probability distribution learned by the high-order BM. Likewise, a criterion is given to establish the topology of the BM depending on the significant correlat...

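    For readers unfamiliar with the model named above: a high-order Boltzmann machine allows interaction terms over more than two units. The generic energy below, with connections of arbitrary order and weights chosen only for the example, illustrates that idea; it is not the paper's formal definition.

        import numpy as np

        def high_order_bm_energy(s, weights):
            """Energy with interactions of any order:
            E(s) = -sum_c w_c * prod_{i in c} s_i, where each key of `weights`
            is a tuple of unit indices (a connection) of arbitrary order."""
            return -sum(w * np.prod([s[i] for i in idx]) for idx, w in weights.items())

        s = np.array([1, 0, 1, 1])                                        # binary unit states
        weights = {(0,): 0.5, (0, 2): 1.0, (1, 3): -0.7, (0, 2, 3): 0.3}  # up to third order
        print(high_order_bm_energy(s, weights))                           # -> -1.8
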
  • Modeling of component failure in neural networks for robustness evaluation: an application to object extraction

    Publication Year: 1995, Page(s):648 - 656
    Cited by:  Papers (6)
    PDF (748 KB)

    The robustness of neural network (NN) based information processing systems with respect to component failure (damaging of nodes/links) is studied. The damaging/component-failure process is modeled as a Poisson process, the instants of damaging are chosen by a statistical sampling technique, and the nodes/links to be damaged are determined randomly. As an illustration, the model is ...

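    A schematic of the failure model described above: damage instants drawn from a Poisson process (exponential inter-arrival times) and, at each instant, a node or link chosen at random. The uniform choice of component and all parameter names are assumptions for illustration only.

        import random

        def poisson_damage_schedule(rate, horizon, n_components, rng=random.Random(0)):
            """Sample damage instants from a Poisson process and pick a random
            component (node or link index) to fail at each instant."""
            t, events = 0.0, []
            while True:
                t += rng.expovariate(rate)          # exponential inter-arrival times
                if t > horizon:
                    break
                events.append((t, rng.randrange(n_components)))
            return events

        # e.g. on average 2 failures per unit time over 5 time units, 100 components
        print(poisson_damage_schedule(rate=2.0, horizon=5.0, n_components=100))
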
  • Robust radar target classifier using artificial neural networks

    Publication Year: 1995, Page(s):760 - 766
    Cited by:  Papers (23)
    PDF (652 KB)

    In this paper an artificial neural network (ANN) based radar target classifier is presented, and its performance is compared with that of a conventional minimum distance classifier. Radar returns from realistic aircraft are synthesized using a thin wire time domain electromagnetic code. The time varying backscattered electric field from each target is processed using both a conventional scheme and...

  • Ridge polynomial networks

    Publication Year: 1995, Page(s):610 - 622
    Cited by:  Papers (63)
    PDF (1204 KB)

    This paper presents a polynomial connectionist network called ridge polynomial network (RPN) that can uniformly approximate any continuous function on a compact set in multidimensional input space R^d, with arbitrary degree of accuracy. This network provides a more efficient and regular architecture compared to ordinary higher-order feedforward networks while maintaining their fast lear...

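    Ridge polynomial networks are usually written as a sum of pi-sigma units of increasing degree, each a product of ridge functions w.x + b. The sketch below assumes that standard form; the paper's exact parameterization and output nonlinearity may differ.

        import numpy as np

        def rpn_output(x, weights, biases):
            """Sum of products of ridge functions: sum_i prod_{j<=i} (w_ij . x + b_ij).
            weights[i] has shape (i+1, d); biases[i] has shape (i+1,)."""
            total = 0.0
            for W_i, b_i in zip(weights, biases):
                total += np.prod(W_i @ x + b_i)       # degree-(i+1) pi-sigma unit
            return total

        rng = np.random.default_rng(0)
        d, degree = 3, 4
        weights = [rng.normal(size=(i + 1, d)) for i in range(degree)]
        biases = [rng.normal(size=(i + 1,)) for i in range(degree)]
        print(rpn_output(rng.normal(size=d), weights, biases))
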
  • Selection of learning parameters for CMAC-based adaptive critic learning

    Publication Year: 1995, Page(s):642 - 647
    Cited by:  Papers (17)  |  Patents (1)
    PDF (496 KB)

    The CMAC-based adaptive critic learning structure consists of two CMAC modules: the action and the critic ones. Learning occurs in both modules. The critic module learns to evaluate the system status. It transforms the system response, usually some occasionally provided reinforcement signal, into organized useful information. Based on the knowledge developed in the critic module, the action module...

  • Probabilistic design of layered neural networks based on their unified framework

    Publication Year: 1995, Page(s):691 - 702
    Cited by:  Papers (10)
    PDF (932 KB)

    Proposes three ways of designing artificial neural networks based on a unified framework and uses them to develop new models. First, the authors show that artificial neural networks can be understood as probability density functions with parameters. Second, the authors propose three design methods for new models: a method for estimating the occurrence probability of the inputs, a method for estima...

  • Improving model accuracy using optimal linear combinations of trained neural networks

    Publication Year: 1995, Page(s):792 - 794
    Cited by:  Papers (79)
    PDF (300 KB)

    Neural network (NN) based modeling often requires trying multiple networks with different architectures and training parameters in order to achieve an acceptable model accuracy. Typically, only one of the trained networks is selected as “best” and the rest are discarded. The authors propose using optimal linear combinations (OLC's) of the corresponding outputs on a set of NN's as an al...

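    The flavour of the proposal above can be shown in a few lines: instead of keeping a single "best" network, regress the target on the outputs of all trained networks and use the fitted weights to combine them. This unconstrained least-squares version omits any intercept or sum-to-one constraint the paper may use.

        import numpy as np

        def optimal_linear_combination(member_preds, targets):
            """Least-squares combination weights for an ensemble.
            member_preds: (n_samples, n_networks); targets: (n_samples,)."""
            w, *_ = np.linalg.lstsq(member_preds, targets, rcond=None)
            return w

        # toy example: three "networks" whose outputs are noisy views of the target
        rng = np.random.default_rng(0)
        y = rng.normal(size=200)
        preds = np.column_stack([y + 0.3 * rng.normal(size=200) for _ in range(3)])
        w = optimal_linear_combination(preds, y)
        print(w, np.mean((preds @ w - y) ** 2))
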
  • A biologically-inspired improved MAXNET

    Publication Year: 1995, Page(s):757 - 759
    Cited by:  Papers (10)
    PDF (268 KB)

    A biologically-inspired modification to MAXNET is proposed. Unlike the original net, where the weights are constant, the weights in the new net are changed dynamically. Consequently, the modified net achieves a drastic improvement in convergence rate. A simple hardware implementation for the modified net is presented.

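    For reference, the original constant-weight MAXNET that the entry above improves on: every unit inhibits all the others by a fixed amount eps until a single positive activation (the winner) remains. The proposed net instead changes these inhibitory weights dynamically; that modification is not reproduced here.

        import numpy as np

        def maxnet(x, eps=None, max_iters=1000):
            """Classic MAXNET winner-take-all with a fixed lateral inhibition eps
            (convergence needs 0 < eps < 1/(n-1))."""
            a = np.asarray(x, dtype=float).copy()
            n = len(a)
            if eps is None:
                eps = 1.0 / n
            for _ in range(max_iters):
                if np.count_nonzero(a > 0) <= 1:
                    break
                a = np.maximum(0.0, a - eps * (a.sum() - a))   # inhibit by all others
            return int(np.argmax(a))

        print(maxnet([0.2, 0.9, 0.4, 0.85]))   # index of the largest input
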
  • New nonleast-squares neural network learning algorithms for hypothesis testing

    Publication Year: 1995, Page(s):596 - 609
    Cited by:  Papers (19)
    PDF (1480 KB)

    Hypothesis testing is a collective name for problems such as classification, detection, and pattern recognition. In this paper we propose two new classes of supervised learning algorithms for feedforward, binary-output neural network structures whose objective is hypothesis testing. All the algorithms are applications of stochastic approximation and are guaranteed to provide optimization with prob...

  • Dynamic learning rate optimization of the backpropagation algorithm

    Publication Year: 1995, Page(s):669 - 677
    Cited by:  Papers (84)  |  Patents (2)
    PDF (800 KB)

    It has been observed by many authors that the backpropagation (BP) error surfaces usually consist of a large number of flat regions as well as extremely steep regions. As such, the BP algorithm with a fixed learning rate will have low efficiency. This paper considers dynamic learning rate optimization of the BP algorithm using derivative information. An efficient method of deriving the first and s...

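    The entry above adapts the BP learning rate from derivatives of the error with respect to the learning rate itself. A toy sketch of that mechanism on a quadratic error surface, using only the first derivative dE/d(eta) = -grad(w) . grad(w - eta*grad(w)) and a safety clamp on eta; the paper derives both first and second derivatives for the actual BP network.

        import numpy as np

        # Toy quadratic error surface E(w) = 0.5 * w' A w, just to show the mechanics.
        A = np.diag([1.0, 10.0])
        E = lambda w: 0.5 * w @ A @ w
        grad = lambda w: A @ w

        def bp_step_with_dynamic_lr(w, eta, meta_lr=1e-4, eta_max=0.15):
            """One gradient step whose learning rate is adapted from dE/d(eta)."""
            g = grad(w)
            w_new = w - eta * g
            dE_deta = -g @ grad(w_new)                       # first derivative w.r.t. eta
            eta_new = float(np.clip(eta - meta_lr * dE_deta, 1e-6, eta_max))
            return w_new, eta_new

        w, eta = np.array([1.0, 1.0]), 0.01
        for _ in range(100):
            w, eta = bp_step_with_dynamic_lr(w, eta)
        print(E(w), eta)   # the error decreases while eta adapts upward
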
  • Programming based learning algorithms of neural networks with self-feedback connections

    Publication Year: 1995, Page(s):771 - 775
    Cited by:  Papers (3)
    PDF (364 KB)

    Discusses the learning problem of neural networks with self-feedback connections and shows that, when the neural network is used as an associative memory, the learning problem can be transformed into a programming (optimization) problem. Thus, the mature optimization techniques of mathematical programming can be used for solving the learning problem of neural networks with self-feedbac...

  • Similarities of error regularization, sigmoid gain scaling, target smoothing, and training with jitter

    Publication Year: 1995, Page(s):529 - 538
    Cited by:  Papers (48)
    PDF (772 KB)

    The generalization performance of feedforward layered perceptrons can, in many cases, be improved by smoothing the target via convolution, regularizing the training error with a smoothing constraint, decreasing the gain (i.e., slope) of the sigmoid nonlinearities, or adding noise (i.e., jitter) to the input training data. In certain important cases, the results of these procedures yield hig...

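    Of the four procedures compared above, training with jitter is the easiest to show concretely: Gaussian noise is added to the inputs at each presentation, which acts much like smoothing or regularization. A minimal sketch, with an assumed noise level sigma.

        import numpy as np

        def jittered_batches(X, y, sigma=0.1, epochs=10, rng=np.random.default_rng(0)):
            """Yield training batches with Gaussian input noise ("jitter") added."""
            for _ in range(epochs):
                yield X + sigma * rng.normal(size=X.shape), y

        X, y = np.random.rand(32, 4), np.random.rand(32, 1)
        for Xj, yj in jittered_batches(X, y):
            pass   # feed (Xj, yj) to any trainer in place of (X, y)
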
  • Holographic reduced representations

    Publication Year: 1995, Page(s):623 - 641
    Cited by:  Papers (95)
    PDF (1756 KB)

    Associative memories are conventionally used to represent data with very simple structure: sets of pairs of vectors. This paper describes a method for representing more complex compositional structure in distributed representations. The method uses circular convolution to associate items, which are represented by vectors. Arbitrary variable bindings, short sequences of various lengths, simple fram...

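    The binding operation named above, circular convolution, and its approximate inverse (correlation) can be written directly with the FFT. A minimal sketch; the vector length and the nearest-neighbour clean-up step are assumptions left to the reader.

        import numpy as np

        def bind(a, b):
            """Circular convolution: the binding operation of holographic
            reduced representations."""
            return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=len(a))

        def unbind(c, a):
            """Approximate decoding: bind with the involution of a
            (a reversed except for its first element)."""
            a_inv = np.concatenate(([a[0]], a[:0:-1]))
            return bind(c, a_inv)

        rng = np.random.default_rng(0)
        n = 512
        role = rng.normal(0, 1 / np.sqrt(n), n)
        filler = rng.normal(0, 1 / np.sqrt(n), n)
        trace = bind(role, filler)
        # the decoded vector is a noisy copy of the filler; clean it up with a
        # nearest-neighbour lookup in practice
        print(np.dot(unbind(trace, role), filler))   # close to 1
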
  • Neural net robot controller with guaranteed tracking performance

    Publication Year: 1995, Page(s):703 - 715
    Cited by:  Papers (372)  |  Patents (2)
    PDF (1064 KB)

    A neural net (NN) controller for a general serial-link robot arm is developed. The NN has two layers so that linearity in the parameters holds, but the “net functional reconstruction error” and robot disturbance input are taken as nonzero. The structure of the NN controller is derived using a filtered error/passivity approach, leading to new NN passivity properties. Online weight tunin...

  • A constructive algorithm for binary neural networks: the oil-spot algorithm

    Publication Year: 1995, Page(s):794 - 797
    Cited by:  Papers (21)
    PDF (412 KB)

    This paper presents a constructive training algorithm for supervised neural networks. The algorithm relies on a topological approach, based on the representation of the mapping of interest onto the binary hypercube of the input space. It dynamically constructs a two-layer neural network by successively involving binary examples. A convenient treatment of real-valued data is possible by means of a ...

  • On sequential construction of binary neural networks

    Publication Year: 1995, Page(s):678 - 690
    Cited by:  Papers (17)
    PDF (1012 KB)

    A new technique, called sequential window learning (SWL), for the construction of two-layer perceptrons with binary inputs is presented. It generates the number of hidden neurons together with the correct values for the weights, starting from any binary training set. The introduction of a new type of neuron, having a window-shaped activation function, considerably increases the convergence speed an...

  • K-winners-take-all circuit with O(N) complexity

    Publication Year: 1995, Page(s):776 - 778
    Cited by:  Papers (62)
    PDF (220 KB)

    Presents a k-winners-take-all circuit that is an extension of the winner-take-all circuit by Lazzaro et al. (1989). The problem of selecting the largest k numbers is formulated as a mathematical programming problem whose solution scheme, based on the Lagrange multiplier method, is directly implemented on an analog circuit. The wire length in this circuit grows only linearly with the number of elem...

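    A digital stand-in for the behaviour of the circuit above: the k largest of N inputs are switched on, the rest off. The analog implementation in the paper solves this through a Lagrangian formulation with O(N) wiring; the sketch below simply thresholds at the k-th largest value and assumes distinct inputs.

        import numpy as np

        def k_winners_take_all(x, k):
            """Set the k largest inputs to 1 and the rest to 0 by thresholding at
            the k-th largest value (ties would admit more than k winners)."""
            x = np.asarray(x, dtype=float)
            threshold = np.partition(x, -k)[-k]
            return (x >= threshold).astype(int)

        print(k_winners_take_all([0.3, 0.9, 0.1, 0.7, 0.5], k=2))   # [0 1 0 1 0]
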
  • Distortion tolerant pattern recognition based on self-organizing feature extraction

    Publication Year: 1995, Page(s):539 - 547
    Cited by:  Papers (52)
    PDF (932 KB)

    A generic, modular, neural network-based feature extraction and pattern classification system is proposed for finding essentially two-dimensional objects or object parts from digital images in a distortion tolerant manner. The distortion tolerance is built up gradually by successive blocks in a pipeline architecture. The system consists of only feedforward neural networks, allowing efficient paral...

  • On the problem of correspondence in range data and some inelastic uses for elastic nets

    Publication Year: 1995, Page(s):716 - 723
    Cited by:  Papers (2)
    PDF (700 KB)

    In this work, the authors propose a novel method to obtain correspondence between range data across image frames using neural-like mechanisms. The method is computationally efficient and tolerant of noise and missing points. Elastic nets, which evolved out of research into mechanisms to establish ordered neural projections between structures of similar geometry, are used to cast correspondence as ...

  • A recurrent Newton algorithm and its convergence properties

    Publication Year: 1995, Page(s):779 - 782
    Cited by:  Papers (6)
    PDF (444 KB)

    In this paper a recurrent Newton algorithm for an important class of recurrent neural networks is introduced. It is noted that a suitable constraint must be imposed on recurrent variables to ensure proper convergence behavior. The simulation results show that the proposed Newton algorithm with the suggested constraint performs uniformly better than the backpropagation algorithm and the Newton algo...

  • A nonlinear projection method based on Kohonen's topology preserving maps

    Publication Year: 1995, Page(s):548 - 559
    Cited by:  Papers (108)
    PDF (1368 KB)

    A nonlinear projection method is presented to visualize high-dimensional data as a 2D image. The proposed method is based on the topology preserving mapping algorithm of Kohonen. The topology preserving mapping algorithm is used to train a 2D network structure. Then the interpoint distances in the feature space between the units in the network are graphically displayed to show the underlying struc...

  • Approximating maximum clique with a Hopfield network

    Publication Year: 1995, Page(s):724 - 735
    Cited by:  Papers (43)
    PDF (1164 KB)

    In a graph, a clique is a set of vertices such that every pair is connected by an edge. MAX-CLIQUE is the optimization problem of finding the largest clique in a given graph and is NP-hard, even to approximate well. Several real-world and theory problems can be modeled as MAX-CLIQUE. In this paper, we efficiently approximate MAX-CLIQUE in a special case of the Hopfield network whose stable states ...

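    A generic way to see how a Hopfield-style network can favour cliques (not the paper's specific construction): reward each selected vertex, penalize every selected pair not joined by an edge, and descend the energy with asynchronous single-unit updates. With the penalty larger than the per-vertex reward, the local minima of this energy are exactly the maximal cliques.

        import numpy as np

        def clique_energy(state, adj, penalty=2.0):
            """Reward selected vertices, penalize selected non-adjacent pairs."""
            s = np.asarray(state, dtype=float)
            non_edges = (1 - adj) - np.eye(len(adj))
            return -s.sum() + 0.5 * penalty * s @ non_edges @ s

        def greedy_descent(adj, iters=200, rng=np.random.default_rng(0)):
            """Asynchronous updates: flip one unit at a time if it lowers the energy."""
            n = len(adj)
            s = rng.integers(0, 2, n)
            for _ in range(iters):
                i = rng.integers(n)
                flipped = s.copy(); flipped[i] ^= 1
                if clique_energy(flipped, adj) < clique_energy(s, adj):
                    s = flipped
            return s

        # triangle {0, 1, 2} plus a pendant vertex 3 attached to vertex 0
        adj = np.array([[0, 1, 1, 1],
                        [1, 0, 1, 0],
                        [1, 1, 0, 0],
                        [1, 0, 0, 0]])
        # often settles on the triangle, but like any Hopfield-style descent it can
        # also stop in the smaller maximal clique {0, 3}
        print(greedy_descent(adj))
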
  • Approximate reconstruction of PET data with a self-organizing neural network

    Publication Year: 1995, Page(s):783 - 789
    Cited by:  Papers (3)  |  Patents (1)
    PDF (420 KB)

    Self-organization was observed using the algorithm of Kohonen with an original “distance” adapted to stimuli resulting from coincident detections of electron-positron annihilation photon pairs. This has led to a method for approximate reconstruction of two-dimensional positron emission tomography (2-D PET) images that is totally independent of the number of detectors. To obtain meaning...


Aims & Scope

IEEE Transactions on Neural Networks is devoted to the science and technology of neural networks, disclosing significant technical knowledge, exploratory developments, and applications of neural networks from biology to software to hardware.

 

This Transactions ceased production in 2011. The current retitled publication is IEEE Transactions on Neural Networks and Learning Systems.
