IEEE Transactions on Neural Networks

Issue 3 • May 1997

Displaying Results 1 - 25 of 37
  • Comments on "Diagonal recurrent neural networks for dynamic systems control". Reproof of theorems 2 and 4 [with reply]

    Publication Year: 1997, Page(s):811 - 814
    Cited by:  Papers (2)
    PDF (98 KB)

    In their original paper, C.-C. Ku and K.Y. Lee (ibid., vol.6, p.144-56, 1995) designed a diagonal recurrent neural network architecture for control systems. Liang asserts that a condition assumed in the proof of its convergence does not necessarily apply, and presents alternative theorems and proofs. Lee replies that Liang has misunderstood the original paper, and also that he made mistakes in his...
  • Author's reply and revision for time-varying weights

    Publication Year: 1997, Page(s):813 - 814
    PDF (67 KB)
  • Pattern Recognition and Neural Networks [Book Reviews]

    Publication Year: 1997, Page(s):815 - 816
    PDF (24 KB)
    Freely Available from IEEE
  • Neural Network Design [Books in Brief]

    Publication Year: 1997, Page(s): 817
    PDF (6 KB)
    Freely Available from IEEE
  • Computational Intelligence PC Tools [Books in Brief]

    Publication Year: 1997, Page(s): 817
    PDF (7 KB)
    Freely Available from IEEE
  • A class of neural networks for independent component analysis

    Publication Year: 1997, Page(s):486 - 504
    Cited by:  Papers (204)
    PDF (640 KB)

    Independent component analysis (ICA) is a recently developed, useful extension of standard principal component analysis (PCA). The ICA model is utilized mainly in blind separation of unknown source signals from their linear mixtures. In this application only the source signals which correspond to the coefficients of the ICA expansion are of interest. In this paper, we propose neural structures rel...
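
The linear mixing model behind ICA can be shown in miniature. The sketch below (plain Python; an illustrative kurtosis-based separation, not the neural structures the paper proposes) mixes two independent uniform sources, whitens the mixtures with a PCA step, and then searches the rotation that PCA leaves undetermined for maximally non-Gaussian outputs. All names and constants are illustrative.

```python
import math
import random
import statistics

random.seed(0)
N = 4000

# Two independent, non-Gaussian (uniform) source signals.
s1 = [random.uniform(-1, 1) for _ in range(N)]
s2 = [random.uniform(-1, 1) for _ in range(N)]

# Observed signals are unknown linear mixtures x = A s.
A = [[1.0, 0.6], [0.4, 1.0]]
x1 = [A[0][0] * a + A[0][1] * b for a, b in zip(s1, s2)]
x2 = [A[1][0] * a + A[1][1] * b for a, b in zip(s1, s2)]

def center(v):
    m = statistics.fmean(v)
    return [t - m for t in v]

def cov(u, v):
    return sum(a * b for a, b in zip(u, v)) / len(u)

x1, x2 = center(x1), center(x2)

# PCA step: whiten via the closed-form eigenrotation of the 2x2 covariance.
c11, c12, c22 = cov(x1, x1), cov(x1, x2), cov(x2, x2)
th = 0.5 * math.atan2(2 * c12, c11 - c22)
ct, st = math.cos(th), math.sin(th)
e1 = [ct * a + st * b for a, b in zip(x1, x2)]
e2 = [-st * a + ct * b for a, b in zip(x1, x2)]
z1 = [t / math.sqrt(cov(e1, e1)) for t in e1]
z2 = [t / math.sqrt(cov(e2, e2)) for t in e2]

def kurt(v):
    # Kurtosis of a unit-variance signal: 0 for Gaussian, negative for uniform.
    return sum(t ** 4 for t in v) / len(v) - 3.0

def rotate(a, p, q):
    ca, sa = math.cos(a), math.sin(a)
    return ([ca * u + sa * w for u, w in zip(p, q)],
            [-sa * u + ca * w for u, w in zip(p, q)])

# ICA step: whitening leaves one rotation undetermined; pick the angle whose
# outputs are maximally non-Gaussian (most negative total kurtosis, since
# the sources here are sub-Gaussian).
best = min((math.pi / 2 * k / 180 for k in range(180)),
           key=lambda a: sum(kurt(y) for y in rotate(a, z1, z2)))
y1, y2 = rotate(best, z1, z2)

def abs_corr(u, v):
    u, v = center(u), center(v)
    return abs(cov(u, v)) / math.sqrt(cov(u, u) * cov(v, v))

# Each recovered component should match one source up to order and sign.
m1 = max(abs_corr(y1, s1), abs_corr(y1, s2))
m2 = max(abs_corr(y2, s1), abs_corr(y2, s2))
```

PCA alone only decorrelates the mixtures; it is the extra non-Gaussianity criterion that pins down the separating rotation, which is why ICA is described as an extension of PCA.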
  • Adaptive control using neural networks and approximate models

    Publication Year: 1997, Page(s):475 - 485
    Cited by:  Papers (253)
    PDF (416 KB)

    The NARMA model is an exact representation of the input-output behavior of finite-dimensional nonlinear discrete-time dynamical systems in a neighborhood of the equilibrium state. However, it is not convenient for purposes of adaptive control using neural networks due to its nonlinear dependence on the control input. Hence, quite often, approximate methods are used for realizing the neural control...
  • Acquiring rule sets as a product of learning in a logical neural architecture

    Publication Year: 1997, Page(s):461 - 474
    Cited by:  Papers (32)
    PDF (312 KB)

    Envisioning neural networks as systems that learn rules calls forth the verification issues already being studied in knowledge-based systems engineering, and complicates these with neural-network concepts such as nonlinear dynamics and distributed memories. We show that the issues can be clarified and the learned rules visualized symbolically by formalizing the semantics of rule-learning in the ma...
  • An improved recurrent neural network for M-PAM symbol detection

    Publication Year: 1997, Page(s):779 - 783
    Cited by:  Papers (10)
    PDF (168 KB)

    In this paper, a fully connected recurrent neural network (RNN) is presented for the recovery of M-ary pulse amplitude modulated (M-PAM) signals in the presence of intersymbol interference and additive white Gaussian noise. The network makes use of two different activation functions. One is the traditional two-level sigmoid function, which is used at its hidden nodes, and the other is the M-level ...
  • Orthogonal projections applied to the assignment problem

    Publication Year: 1997, Page(s):774 - 778
    Cited by:  Papers (5)
    PDF (184 KB)

    This paper presents a significant improvement to the traditional neural approach to the assignment problem (AP). The technique is based on identifying the feasible space (F) with a linear subspace of R^(n^2), and then analyzing the orthogonal projection onto F. The formula for the orthogonal projection is shown to be simple and easy to integrate into the traditional neural model. This pro...
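
The "simple formula" for such a projection can be stated directly. The sketch below (plain Python) uses a standard closed form for orthogonally projecting an n-by-n matrix onto the affine subspace of matrices whose rows and columns each sum to one; it is illustrative and not necessarily the paper's exact derivation.

```python
def project_row_col_sums(X):
    """Orthogonal projection of an n-by-n matrix onto the affine subspace
    F = { Y : every row sum = 1 and every column sum = 1 }, the feasible
    space used in neural assignment-problem solvers.
    Standard closed form (illustrative, not taken from the paper):
        Y[i][j] = X[i][j] + (1 - r_i)/n + (1 - c_j)/n + (s - n)/n^2,
    where r_i, c_j are row/column sums and s is the total sum."""
    n = len(X)
    r = [sum(row) for row in X]                              # row sums
    c = [sum(X[i][j] for i in range(n)) for j in range(n)]   # column sums
    s = sum(r)                                               # total sum
    return [[X[i][j] + (1 - r[i]) / n + (1 - c[j]) / n + (s - n) / n ** 2
             for j in range(n)]
            for i in range(n)]

# Project an arbitrary matrix and check the constraints hold exactly.
X = [[2.0, 0.5, -1.0],
     [0.0, 3.0,  1.0],
     [1.0, 1.0,  1.0]]
Y = project_row_col_sums(X)
row_sums = [sum(row) for row in Y]
col_sums = [sum(Y[i][j] for i in range(3)) for j in range(3)]
```

Because the correction to each entry is affine in the row, column, and total sums, it drops straight into a neural update rule as an extra linear term, which is what makes the projection easy to integrate into the traditional model.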
  • SOIM: a self-organizing invertible map with applications in active vision

    Publication Year: 1997, Page(s):758 - 773
    Cited by:  Papers (7)  |  Patents (1)
    PDF (376 KB)

    We propose a novel neural network, called the self-organized invertible map (SOIM), that is capable of learning many-to-one functional mappings in a self-organized and online fashion. The design and performance of the SOIM are highlighted by learning a many-to-one functional mapping that exists in active vision for spatial representation of three-dimensional point targets. The learned spatial rep...
  • Implementations of artificial neural networks using current-mode pulse width modulation technique

    Publication Year: 1997, Page(s):532 - 548
    Cited by:  Papers (16)  |  Patents (1)
    PDF (532 KB)

    The use of a current-mode pulse width modulation (CM-PWM) technique to implement analog artificial neural networks (ANNs) is presented. This technique can be used to efficiently implement the weighted summation operations (WSOs) that are required in the realization of a general ANN. The sigmoidal transformation is inherently performed by the nonlinear transconductance amplifier, which is a key compo...
  • Real-time classification of rotating shaft loading conditions using artificial neural networks

    Publication Year: 1997, Page(s):748 - 757
    Cited by:  Papers (34)  |  Patents (1)
    PDF (192 KB)

    Vibration analysis can give an indication of the condition of a rotating shaft, highlighting potential faults such as unbalance and rubbing. Faults may, however, occur only intermittently, so detecting them requires continuous monitoring with real-time analysis. This paper describes the use of artificial neural networks (ANNs) for classification of condition and compares these with othe...
  • A new evolutionary system for evolving artificial neural networks

    Publication Year: 1997, Page(s):694 - 713
    Cited by:  Papers (409)  |  Patents (1)
    PDF (396 KB)

    This paper presents a new evolutionary system, i.e., EPNet, for evolving artificial neural networks (ANNs). The evolutionary algorithm used in EPNet is based on Fogel's evolutionary programming (EP). Unlike most previous studies on evolving ANNs, this paper puts its emphasis on evolving ANN behaviors. Five mutation operators proposed in EPNet reflect such an emphasis on evolving behaviors. Clos...
  • Neural-network feature selector

    Publication Year: 1997, Page(s):654 - 662
    Cited by:  Papers (153)
    PDF (216 KB)

    Feature selection is an integral part of most learning algorithms. Because data often contain irrelevant and redundant attributes, selecting only the relevant attributes can be expected to give a machine learning method higher predictive accuracy. In this paper, we propose the use of a three-layer feedforward neural network to select those input attributes that are most useful for discri...
  • On convergence properties of pocket algorithm

    Publication Year: 1997, Page(s):623 - 629
    Cited by:  Papers (14)
    PDF (280 KB)

    The problem of finding optimal weights for a single threshold neuron starting from a general training set is considered. Among the variety of possible learning techniques, the pocket algorithm has a proper convergence theorem which asserts its optimality. However, the original proof ensures the asymptotic achievement of an optimal weight vector only if the inputs in the training set are integer or...
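
The pocket algorithm under analysis here is compact enough to state in full: run ordinary perceptron updates, but keep ("pocket") the best weight vector seen so far, as measured on the whole training set. A minimal plain-Python sketch with an illustrative non-separable training set (not an example from the paper):

```python
import random

def predict(w, x):
    # Threshold neuron; the bias is folded in as the last weight.
    s = sum(wi * xi for wi, xi in zip(w, x + [1.0]))
    return 1 if s >= 0 else -1

def accuracy(w, X, y):
    return sum(predict(w, x) == t for x, t in zip(X, y)) / len(X)

def pocket(X, y, iters=2000, seed=0):
    rng = random.Random(seed)
    w = [0.0] * (len(X[0]) + 1)
    best_w, best_acc = list(w), accuracy(w, X, y)
    for _ in range(iters):
        i = rng.randrange(len(X))
        if predict(w, X[i]) != y[i]:
            # Ordinary perceptron correction on a misclassified example...
            w = [wi + y[i] * xi for wi, xi in zip(w, X[i] + [1.0])]
            # ...but pocket it only if it beats the best accuracy so far.
            acc = accuracy(w, X, y)
            if acc > best_acc:
                best_w, best_acc = list(w), acc
    return best_w, best_acc

# XOR is not linearly separable: the plain perceptron cycles forever,
# while the pocketed weights retain the best threshold solution visited.
X = [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]]
y = [-1, 1, 1, -1]
w, acc = pocket(X, y)
```

The convergence question the paper studies is exactly whether this pocketed accuracy approaches the optimum as the number of iterations grows, and under what assumptions on the training inputs.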
  • An iterative pruning algorithm for feedforward neural networks

    Publication Year: 1997, Page(s):519 - 531
    Cited by:  Papers (119)
    PDF (340 KB)

    The problem of determining the proper size of an artificial neural network is recognized to be crucial, especially for its practical implications in such important issues as learning and generalization. One popular approach for tackling this problem, commonly known as pruning, consists of training a larger-than-necessary network and then removing unnecessary weights/nodes. In this paper, a...
  • Temporal and spatial stability in translation invariant linear resistive networks

    Publication Year: 1997, Page(s):736 - 747
    PDF (732 KB)

    Simple algebraic methods are proposed to evaluate the temporal and spatial stability of translation invariant linear resistive networks. Temporal stability is discussed for a finite number of nodes n. The proposed method evaluates stability of a Toeplitz pencil A_n(a) + μB_n(b) in terms of the parameters a_i and b_i. In many cases a simple method allows one to...
  • Quantum neural networks (QNNs): inherently fuzzy feedforward neural networks

    Publication Year: 1997, Page(s):679 - 693
    Cited by:  Papers (85)
    PDF (628 KB)

    This paper introduces quantum neural networks (QNNs), a class of feedforward neural networks (FFNNs) inherently capable of estimating the structure of a feature space in the form of fuzzy sets. The hidden units of these networks develop quantized representations of the sample information provided by the training data set in various graded levels of certainty. Unlike other approaches attempting to ...
  • Fast parallel off-line training of multilayer perceptrons

    Publication Year: 1997, Page(s):646 - 653
    Cited by:  Papers (30)
    PDF (184 KB)

    Various approaches to the parallel implementation of second-order gradient-based multilayer perceptron training algorithms are described. Two main classes of algorithm are defined involving Hessian and conjugate gradient-based methods. The limited- and full-memory Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithms are selected as representative examples and used to show that the step size and grad...
  • Extended least squares based algorithm for training feedforward networks

    Publication Year: 1997, Page(s):806 - 810
    Cited by:  Papers (23)
    PDF (148 KB)

    An extended least squares-based algorithm for feedforward networks is proposed. The weights connecting the last hidden and output layers are first evaluated by a least squares algorithm. The weights between the input and hidden layers are then evaluated using modified gradient descent algorithms. This arrangement eliminates the stalling problem experienced by pure least squares type algorithms; ...
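
The division of labor the abstract describes can be sketched under simplifying assumptions. Below, only the least squares half is shown: the output weights of a tiny two-hidden-unit network are solved exactly via the 2x2 normal equations, while the input-to-hidden weights are fixed at illustrative values rather than refined by the paper's modified gradient descent. All names and constants are illustrative.

```python
import math

# Toy regression data: learn t = sin(u) on a grid over [-1, 1].
U = [i / 10 for i in range(-10, 11)]
T = [math.sin(u) for u in U]

# Fixed illustrative input-to-hidden weights (in the paper these would be
# refined afterwards by modified gradient descent; that step is omitted).
def hidden(u):
    return [math.tanh(1.5 * u), math.tanh(0.7 * u - 0.2)]

H = [hidden(u) for u in U]

# Output-layer weights by exact least squares: solve (H^T H) w = H^T t,
# here just the 2x2 normal equations in closed form.
a = sum(h[0] * h[0] for h in H)
b = sum(h[0] * h[1] for h in H)
c = sum(h[1] * h[1] for h in H)
p = sum(h[0] * t for h, t in zip(H, T))
q = sum(h[1] * t for h, t in zip(H, T))
det = a * c - b * b
w = [(c * p - b * q) / det, (a * q - b * p) / det]

# Training error with the exactly-solved output layer.
mse = sum((w[0] * h[0] + w[1] * h[1] - t) ** 2
          for h, t in zip(H, T)) / len(T)
```

Solving the linear output layer exactly at each stage is what removes the stalling that pure least squares training can suffer when it must also account for the nonlinear hidden layer.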
  • Supervised learning of perceptron and output feedback dynamic networks: a feedback analysis via the small gain theorem

    Publication Year: 1997, Page(s):612 - 622
    Cited by:  Papers (12)
    PDF (424 KB)

    This paper provides a time-domain feedback analysis of the perceptron learning algorithm and of training schemes for dynamic networks with output feedback. It studies the robustness performance of the algorithms in the presence of uncertainties that might be due to noisy perturbations in the reference signals or due to modeling mismatch. In particular, bounds are established on the step-size param...
  • Improving the error backpropagation algorithm with a modified error function

    Publication Year: 1997, Page(s):799 - 803
    Cited by:  Papers (50)
    PDF (188 KB)

    This letter proposes a modified error function to improve the error backpropagation (EBP) algorithm of multilayer perceptrons (MLPs), which suffers from slow learning speed. To accelerate the learning speed of the EBP algorithm, the proposed method reduces the probability that output nodes are near the wrong extreme value of the sigmoid activation function. This is acquired through a strong error signa...
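
The slow-learning mechanism the letter targets is easy to exhibit numerically. With squared error behind a sigmoid output, the output delta (o - t)·o·(1 - o) is damped by the sigmoid derivative, so it nearly vanishes when the output sits at the wrong extreme. The sketch below contrasts this with cross-entropy, a well-known error modification whose delta does not vanish there; it is shown only as an example of the idea, not as the letter's exact error function.

```python
import math

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

# Target t = 1, but the output node sits near the WRONG extreme (o near 0).
t = 1.0
o = sigmoid(-6.0)   # about 0.0025

# Standard EBP with squared error E = (o - t)^2 / 2: the output delta is
# damped by the sigmoid derivative o(1 - o), so it nearly vanishes here.
delta_mse = (o - t) * o * (1.0 - o)

# A modified error function of the kind the letter argues for: cross-entropy,
# E = -t*log(o) - (1 - t)*log(1 - o), illustrates the idea. Its gradient
# cancels the sigmoid-derivative factor, leaving delta = o - t, which stays
# strong at the wrong extreme.
delta_ce = o - t
```

With the squared error, a badly wrong output node generates almost no error signal and learning stalls; a modified error keeps the signal on the order of the output error itself.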
  • On solving systems of linear inequalities with artificial neural networks

    Publication Year: 1997, Page(s):590 - 600
    Cited by:  Papers (12)
    PDF (380 KB)

    The implementation of the relaxation-projection algorithm by artificial neural networks to solve sets of linear inequalities is examined. The different versions of this algorithm are described, and theoretical convergence results are given. The best known analog optimization solvers are shown to use the simultaneous projection version of it. Neural networks that implement each version are describe...
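
The relaxation-projection scheme itself is brief: cycle through the inequalities a_i·x <= b_i and, whenever one is violated, move x toward (for relaxation parameter lam = 1, exactly onto) the boundary of the violated half-space. A plain-Python sketch of the sequential version, with an illustrative system (not one from the paper):

```python
def solve_inequalities(A, b, lam=1.0, sweeps=200):
    """Sequential relaxation-projection for the system a_i . x <= b_i.
    lam = 1 projects exactly onto a violated half-space; 0 < lam < 2 is
    the usual relaxation range. Illustrative sketch only."""
    n = len(A[0])
    x = [0.0] * n
    for _ in range(sweeps):
        for a, bi in zip(A, b):
            v = sum(ai * xi for ai, xi in zip(a, x)) - bi   # violation amount
            if v > 0:
                # Step along -a, scaled so lam = 1 lands on the hyperplane.
                nrm2 = sum(ai * ai for ai in a)
                step = lam * v / nrm2
                x = [xi - step * ai for xi, ai in zip(x, a)]
    return x

# A feasible 2-variable system:  x0 >= 1,  x1 >= 1,  x0 + x1 <= 4.
A = [[-1.0, 0.0], [0.0, -1.0], [1.0, 1.0]]
b = [-1.0, -1.0, 4.0]
x = solve_inequalities(A, b)
violations = [sum(ai * xi for ai, xi in zip(a, x)) - bi
              for a, bi in zip(A, b)]
```

The simultaneous-projection variant mentioned in the abstract would instead average the corrections from all violated constraints in each step, which is what maps naturally onto a parallel analog network.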
  • A methodology for constructing fuzzy algorithms for learning vector quantization

    Publication Year: 1997, Page(s):505 - 518
    Cited by:  Papers (46)
    PDF (612 KB)

    This paper presents a general methodology for the development of fuzzy algorithms for learning vector quantization (FALVQ). The design of specific FALVQ algorithms according to existing approaches reduces to the selection of the membership function assigned to the weight vectors of an LVQ competitive neural network, which represent the prototypes. The development of a broad variety of FALVQ algori...

Aims & Scope

IEEE Transactions on Neural Networks is devoted to the science and technology of neural networks, publishing significant technical knowledge, exploratory developments, and applications of neural networks from biology to software to hardware.

 

This Transactions ceased production in 2011. The current retitled publication is IEEE Transactions on Neural Networks and Learning Systems.
