IEEE Transactions on Neural Networks

Issue 6 • Date Nov. 1992

  • Comments on "Bayes statistical behavior and valid generalization of pattern classifying neural networks" [with reply]

    Publication Year: 1992, Page(s):1026 - 1027
    Cited by:  Papers (2)

    In the above-titled paper (ibid., vol.2, p.471-475, July 1991), the authors claim that neural network classifiers duplicate the decision rule created by the empirical Bayes rule. The commenter states that this statement is, in fact, not generally true and points out an error in the proof. The commenter also shows that the related true statement about the relation between neural and Bayes classifie...

  • Comments on "Dynamic programming approach to optimal weight selection in multilayer neural networks" [with reply]

    Publication Year: 1992, Page(s):1028 - 1029
    Cited by:  Papers (1)

    The commenter claims that in the above-titled paper (ibid., vol.2, p.465-467, July 1991), which presents an efficient algorithm using dynamic programming to find weights which load a set of examples into a feedforward neural network with minimal error, a contradiction lies buried in the paper's notation. In reply, the author maintains that the comments are due to some misunderstandings about the i...

  • Neural subnet design by direct polynomial mapping

    Publication Year: 1992, Page(s):1024 - 1026
    Cited by:  Papers (4)  |  Patents (1)

    In a recent paper by M. Chen and M. Manry (1990), it was shown that multilayer perceptron neural networks can be used to form products of any number of inputs, thereby constructively proving universal approximation. This result is extended, and a method for the analysis and synthesis of single-input, single-output neural subnetworks is described. Given training samples of a function to be approxim...

  • Handwritten digit recognition by neural networks with single-layer training

    Publication Year: 1992, Page(s):962 - 968
    Cited by:  Papers (35)

    It is shown that neural network classifiers with single-layer training can be applied efficiently to complex real-world classification problems such as the recognition of handwritten digits. The STEPNET procedure, which decomposes the problem into simpler subproblems which can be solved by linear separators, is introduced. Provided appropriate data representations and learning rules are used, perf...

  • Feedback stabilization using two-hidden-layer nets

    Publication Year: 1992, Page(s):981 - 990
    Cited by:  Papers (52)  |  Patents (2)

    The representational capabilities of one-hidden-layer and two-hidden-layer nets consisting of feedforward interconnections of linear threshold units are compared. It is remarked that for certain problems two hidden layers are required, contrary to what might be in principle expected from the known approximation theorems. The differences are not based on numerical accuracy or number of units needed...

  • Digital neural emulators using tree accumulation and communication structures

    Publication Year: 1992, Page(s):934 - 950
    Cited by:  Papers (7)  |  Patents (1)

    Three digital artificial neural network processors suitable for the emulation of fully interconnected neural networks are proposed. The processors use N^2 multipliers and an arrangement of tree structures that provide the communication and accumulation function either individually or in a combined manner using communicating adder trees. The performance for the emulation of an ...

  • Characteristics of Hebbian-type associative memories having faulty interconnections

    Publication Year: 1992, Page(s):969 - 980
    Cited by:  Papers (6)

    The performance of Hebbian-type associative memories (HAMs) in the presence of faulty (open- and short-circuit) synaptic interconnections is examined and equations for predicting network reliability are developed. The results show that a network with open-circuit interconnection faults has a higher probability of direct convergence than a network with short-circuit interconnection faults when the ...

  • Improving generalization performance using double backpropagation

    Publication Year: 1992, Page(s):991 - 997
    Cited by:  Papers (26)

    In order to generalize from a training set to a test set, it is desirable that small changes in the input space of a pattern do not change the output components. This can be done by forcing this behavior as part of the training algorithm. This is done in double backpropagation by forming an energy function that is the sum of the normal energy term found in backpropagation and an additional term th...

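The combined energy term described in this abstract can be illustrated with a minimal sketch. This assumes a single sigmoid unit with squared-error loss and a hypothetical penalty weight `lam`; the paper's exact formulation may differ.

```python
import numpy as np

# Sketch of a double-backpropagation-style energy. Assumptions of this
# sketch: one sigmoid unit, squared-error loss, hypothetical weight `lam`.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def double_bp_energy(w, x, t, lam=0.1):
    y = sigmoid(w @ x)
    e1 = 0.5 * (y - t) ** 2                # standard backpropagation energy
    de1_dx = (y - t) * y * (1.0 - y) * w   # gradient of e1 w.r.t. the input x
    e2 = 0.5 * np.sum(de1_dx ** 2)         # penalizes sensitivity to input changes
    return e1 + lam * e2

print(double_bp_energy(np.array([0.5, -0.3]), np.array([1.0, 2.0]), t=1.0))
```

Minimizing the second term flattens the error surface with respect to the inputs, which is the mechanism the abstract credits for improved generalization.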
  • Second-order neural nets for constrained optimization

    Publication Year: 1992, Page(s):1021 - 1024
    Cited by:  Papers (13)

    Analog neural nets for constrained optimization are proposed as an analogue of Newton's algorithm in numerical analysis. The neural model is globally stable and can converge to the constrained stationary points. Nonlinear neurons are introduced into the net, making it possible to solve optimization problems where the variables take discrete values, i.e., combinatorial optimization.

  • Guaranteed convergence in a class of Hopfield networks

    Publication Year: 1992, Page(s):951 - 961
    Cited by:  Papers (27)

    A class of symmetric Hopfield networks with nonpositive synapses and zero threshold is analyzed in detail. It is shown that all stationary points have a one-to-one correspondence with the minimal vertex covers of certain undirected graphs, that the sequential Hopfield algorithm as applied to this class of networks converges in at most 2n steps (n being the number of neurons), and...

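The vertex-cover correspondence described above can be illustrated on a small graph. A minimal sketch, assuming 0/1 neurons, zero thresholds, weight -1 per graph edge, and the update rule x_i ← 1 iff the weighted input is nonnegative (conventions of this sketch, not necessarily the paper's): stable states are then maximal independent sets, whose complements are minimal vertex covers.

```python
import numpy as np

# Sequential Hopfield updates on a network built from a graph.
# Assumptions of this sketch: 0/1 neurons, zero thresholds, w_ij = -1
# per edge, x_i <- 1 iff weighted input >= 0.
def sequential_hopfield(W, x, sweeps=4):
    for _ in range(sweeps):
        for i in range(len(x)):
            x[i] = 1 if W[i] @ x >= 0 else 0
    return x

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # a 4-cycle graph
n = 4
W = np.zeros((n, n))
for i, j in edges:
    W[i, j] = W[j, i] = -1.0

x = sequential_hopfield(W, np.zeros(n, dtype=int))
independent_set = {i for i in range(n) if x[i] == 1}
vertex_cover = set(range(n)) - independent_set   # complement of the stable state
print(vertex_cover)
```

For the 4-cycle, the stable state found is an independent set on opposite vertices, and its complement covers every edge, matching the stationary-point characterization in the abstract.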
  • Gaussian networks for direct adaptive control

    Publication Year: 1992, Page(s):837 - 863
    Cited by:  Papers (632)  |  Patents (8)

    A direct adaptive tracking control architecture is proposed and evaluated for a class of continuous-time nonlinear dynamic systems for which an explicit linear parameterization of the uncertainty in the dynamics is either unknown or impossible. The architecture uses a network of Gaussian radial basis functions to adaptively compensate for the plant nonlinearities. Under mild assumptions about the ...

  • A two-phase optimization neural network

    Publication Year: 1992, Page(s):1003 - 1009
    Cited by:  Papers (29)

    A novel two-phase neural network that is suitable for solving a large class of constrained or unconstrained optimization problems is presented. For both types of problems with solutions lying in the interior of the feasible regions, the phase-one structure of the network alone is sufficient. When the solutions of constrained problems are on the boundary of the feasible regions, the proposed two-pha...

  • Wavelet networks

    Publication Year: 1992, Page(s):889 - 898
    Cited by:  Papers (573)

    A wavelet network concept, which is based on wavelet transform theory, is proposed as an alternative to feedforward neural networks for approximating arbitrary nonlinear functions. The basic idea is to replace the neurons by `wavelons', i.e., computing units obtained by cascading an affine transform and a multidimensional wavelet. Then these affine transforms and the synaptic weights must be ident...

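The "wavelon" construction can be sketched in one dimension. The following assumes a Mexican-hat mother wavelet and hand-picked dilations, translations, and weights, whereas the paper identifies these parameters from data:

```python
import numpy as np

# One-dimensional wavelet-network sketch. Assumptions: "Mexican hat"
# mother wavelet; dilations a_i, translations b_i, and weights w_i are
# fixed by hand here rather than identified by training.
def mexican_hat(u):
    return (1.0 - u ** 2) * np.exp(-0.5 * u ** 2)

def wavelet_net(x, w, a, b):
    # each "wavelon" cascades an affine transform a_i*x + b_i with the wavelet
    return sum(wi * mexican_hat(ai * x + bi) for wi, ai, bi in zip(w, a, b))

x = np.linspace(-2.0, 2.0, 5)
y = wavelet_net(x, w=[1.0, 0.5], a=[1.0, 2.0], b=[0.0, -1.0])
print(y)
```

The dilation and translation parameters play the role that weights and biases play in a sigmoidal net, which is why they must be identified jointly with the output weights.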
  • Can backpropagation error surface not have local minima

    Publication Year: 1992, Page(s):1019 - 1021
    Cited by:  Papers (31)

    It is shown theoretically that for an arbitrary T-element training set with t (t ≤ T) different inputs, the backpropagation error surface does not have suboptimal local minima if the network is capable of exactly implementing an arbitrary training set consisting of t different patterns. As a special case, the error surface of a backpropagation network ...

  • Classification trees with neural network feature extraction

    Publication Year: 1992, Page(s):923 - 933
    Cited by:  Papers (36)  |  Patents (3)

    The use of small multilayer nets at the decision nodes of a binary classification tree to extract nonlinear features is proposed. The nets are trained and the tree is grown using a gradient-type learning algorithm in the multiclass case. The method improves on standard classification tree design methods in that it generally produces trees with lower error rates and fewer nodes. It also reduc...

  • A note on the complexity of reliability in neural networks

    Publication Year: 1992, Page(s):998 - 1002
    Cited by:  Papers (1)

    It is shown that in a standard discrete neural network model with small fan-in, tolerance to random malicious faults can be achieved with a log-linear increase in the number of neurons and a constant factor increase in parallel time, provided fan-in can increase arbitrarily. A similar result is obtained for a nonstandard but closely related model with no restriction on fan-in.

  • Enhanced training algorithms, and integrated training/architecture selection for multilayer perceptron networks

    Publication Year: 1992, Page(s):864 - 875
    Cited by:  Papers (25)

    The standard backpropagation-based multilayer perceptron training algorithm suffers from a slow asymptotic convergence rate. Sophisticated nonlinear least-squares and quasi-Newton optimization techniques are used to construct enhanced multilayer perceptron training algorithms, which are then compared to the backpropagation algorithm in the context of several example problems. In addition, an integ...

  • A parallel network for visual cognition

    Publication Year: 1992, Page(s):906 - 922
    Cited by:  Papers (3)  |  Patents (4)

    The authors describe a parallel dynamical system designed to integrate model-based and data-driven approaches to image recognition in a neural network, and study one component of the system in detail. That component is the translation-invariant network of probabilistic cellular automata (PCA), which combines feature-detector outputs and collectively performs enhancement and recognition functions. ...

  • Weighted learning of bidirectional associative memories by global minimization

    Publication Year: 1992, Page(s):1010 - 1018
    Cited by:  Papers (16)

    A weighted learning algorithm for bidirectional associative memories (BAMs) by means of global minimization, where each desired pattern is weighted, is described. According to the cost function that measures the goodness of the BAM, the learning algorithm is formulated as a global minimization problem and solved by a gradient descent rule. The learning approach guarantees not only that each desire...

  • Avoiding false local minima by proper initialization of connections

    Publication Year: 1992, Page(s):899 - 905
    Cited by:  Papers (54)

    The training of neural net classifiers is often hampered by the occurrence of local minima, which results in the attainment of inferior classification performance. It has been shown that the occurrence of local minima in the criterion function is often related to specific patterns of defects in the classifier. In particular, three main causes for local minima were identified. Such an understanding...

  • Massively parallel architectures for large scale neural network simulations

    Publication Year: 1992, Page(s):876 - 888
    Cited by:  Papers (4)

    A toroidal lattice architecture (TLA) and a planar lattice architecture (PLA) are proposed as massively parallel neurocomputer architectures for large-scale simulations. The performance of these architectures is almost proportional to the number of node processors, and they adopt the most efficient two-dimensional processor connections for WSI implementation. They also give a solution to the conne...

Aims & Scope

IEEE Transactions on Neural Networks is devoted to the science and technology of neural networks, which disclose significant technical knowledge, exploratory developments, and applications of neural networks from biology to software to hardware.

This Transactions ceased production in 2011. The current retitled publication is IEEE Transactions on Neural Networks and Learning Systems.
