By Topic

Neural Networks, IEEE Transactions on

Issue 4 • July 1999


Displaying Results 1 - 25 of 31
  • Design of GBSB neural associative memories using semidefinite programming

    Page(s): 946 - 950
    PDF (102 KB)

    This paper concerns reliable search for the optimally performing GBSB (generalized brain-state-in-a-box) neural associative memory given a set of prototype patterns to be stored as stable equilibrium points. First, we observe some new qualitative properties of the GBSB model. Next, we formulate the synthesis of GBSB neural associative memories as a constrained optimization problem. Finally, we convert the optimization problem into a semidefinite program (SDP), which can be solved efficiently by recently developed interior point methods. The validity of this approach is illustrated by a design example.
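
The storage goal described above can be illustrated without an SDP solver. The sketch below is a hedged illustration, not the paper's synthesis method: it builds a simple outer-product weight matrix for two orthogonal bipolar prototypes and checks that each is a fixed point of a saturated GBSB-style update; the names `gbsb_step`, `W`, and `alpha` are illustrative.

```python
import numpy as np

def gbsb_step(x, W, b, alpha=0.3):
    # One saturated ("brain-state-in-a-box") update step.
    return np.clip(x + alpha * (W @ x + b), -1.0, 1.0)

# Two orthogonal bipolar prototypes to be stored as stable equilibria.
p1 = np.array([1.0, 1.0, 1.0, 1.0])
p2 = np.array([1.0, -1.0, 1.0, -1.0])

# Outer-product (Hebbian) weights: W @ p_i = p_i for orthogonal prototypes.
W = (np.outer(p1, p1) + np.outer(p2, p2)) / 4.0
b = np.zeros(4)

# Each prototype is a fixed point: the update pushes it outward and the
# clipping saturates it back onto the hypercube vertex.
for p in (p1, p2):
    assert np.allclose(gbsb_step(p, W, b), p)
```

The paper's SDP formulation goes further, searching over all weight matrices that make the prototypes asymptotically stable while optimizing a performance criterion.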

  • Functional networks with applications: a neural-based paradigm [Book Review]

    Page(s): 982
    PDF (5 KB)
    Freely Available from IEEE
  • Independent component analysis: theory and applications [Book Review]

    Page(s): 982
    PDF (5 KB)
    Freely Available from IEEE
  • FEM-based neural-network approach to nonlinear modeling with application to longitudinal vehicle dynamics control

    Page(s): 885 - 897
    PDF (316 KB)

    A finite-element method (FEM)-based neural-network approach to nonlinear autoregressive with exogenous input (NARX) modeling is presented. The method uses multilinear interpolation functions on C0 rectangular elements. The local and global structure of the resulting model is analyzed. It is shown that the model can be interpreted both as a local model network and as a single-layer feedforward neural network. The main aim is to use the model for nonlinear control design. The proposed FEM NARX description is easily accessible to feedback linearizing control techniques. Its use with a two-degree-of-freedom nonlinear internal model controller is discussed. The approach is applied to modeling of the nonlinear longitudinal dynamics of an experimental lorry, using measured data. The modeling results are compared with local model network and multilayer perceptron approaches. A nonlinear speed controller was designed based on the identified FEM model. The controller was implemented in a test vehicle, and several experimental results are presented.

  • A fast U-D factorization-based learning algorithm with applications to nonlinear system modeling and identification

    Page(s): 930 - 938
    PDF (204 KB)

    A fast learning algorithm for training multilayer feedforward neural networks (FNN) by using a fading memory extended Kalman filter (FMEKF) is presented first, along with a technique using a self-adjusting time-varying forgetting factor. Then a U-D factorization-based FMEKF is proposed to further improve the learning rate and accuracy of the FNN. In comparison with the backpropagation (BP) and existing EKF-based learning algorithms, the proposed U-D factorization-based FMEKF algorithm provides much more accurate learning results, using fewer hidden nodes. It has improved convergence rate and numerical stability (robustness). In addition, it is less sensitive to start-up parameters (e.g., initial weights and covariance matrix) and the randomness in the observed data. It also has good generalization ability and needs less training time to achieve a specified learning accuracy. Simulation results in modeling and identification of nonlinear dynamic systems are given to show the effectiveness and efficiency of the proposed algorithm.
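
The "fading memory" idea above, in its simplest linear form, is recursive least squares with a forgetting factor: older observations are geometrically discounted so the estimator tracks changing parameters. The sketch below shows only this linear analog, not the paper's U-D factorized FMEKF for neural-network weights; `rls_step` and `lam` are illustrative names.

```python
import numpy as np

def rls_step(w, P, x, y, lam=0.98):
    # One recursive least-squares update with forgetting factor lam:
    # data older by k steps are down-weighted by lam**k ("fading memory").
    Px = P @ x
    k = Px / (lam + x @ Px)          # gain vector
    w = w + k * (y - w @ x)          # innovation-driven weight update
    P = (P - np.outer(k, Px)) / lam  # covariance update
    return w, P

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
w = np.zeros(2)
P = np.eye(2) * 100.0                # large initial covariance

for _ in range(200):
    x = rng.normal(size=2)
    y = w_true @ x                   # noiseless linear "plant"
    w, P = rls_step(w, P, x, y)

assert np.allclose(w, w_true, atol=1e-3)
```

The paper's EKF-based trainer applies the same recursion to the linearized input-output map of the network, with U-D factorization of P for numerical stability.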

  • A new error function at hidden layers for fast training of multilayer perceptrons

    Page(s): 960 - 964
    PDF (144 KB)

    This paper proposes a new error function at hidden layers to speed up the training of multilayer perceptrons (MLPs). With this new hidden error function, the layer-by-layer (LBL) algorithm approximately converges to the error backpropagation algorithm with optimum learning rates. In particular, the optimum learning rate for a hidden weight vector appears approximately as a multiplication of two optimum factors, one for minimizing the new hidden error function and the other for assigning hidden targets. Effectiveness of the proposed error function was demonstrated for handwritten digit recognition and isolated-word recognition tasks. Very fast learning convergence was obtained for MLPs without the stalling problem experienced in conventional LBL algorithms.

  • Predicting neutron diffusion eigenvalues with a query-based adaptive neural architecture

    Page(s): 790 - 800
    PDF (252 KB)

    A query-based approach for adaptively retraining and restructuring a two-hidden-layer artificial neural network (ANN) has been developed for the speedy prediction of the fundamental mode eigenvalue of the neutron diffusion equation, a standard nuclear reactor core design calculation which normally requires the iterative solution of a large-scale system of nonlinear partial differential equations (PDEs). The approach developed focuses primarily upon the adaptive selection of training and cross-validation data and on artificial neural-network (ANN) architecture adjustments, with the objective of improving the accuracy and generalization properties of ANN-based neutron diffusion eigenvalue predictions. For illustration, the performance of a “bare bones” feedforward multilayer perceptron (MLP) is upgraded through a variety of techniques; namely, nonrandom initial training set selection, adjoint function input weighting, teacher-student membership and equivalence queries for generation of appropriate training data, and a dynamic node architecture (DNA) implementation. The global methodology is flexible in that it can “wrap around” any specific training algorithm selected for the static calculations (i.e., training iterations with a fixed training set and architecture). Finally, the improvements obtained are carefully contrasted against past works reported in the literature.

  • Blind equalization of a noisy channel by linear neural network

    Page(s): 918 - 924
    PDF (168 KB)

    In this paper, a new neural approach is introduced for the problem of blind equalization in digital communications. Necessary and sufficient conditions for blind equalization are proposed, which can be implemented by a two-layer linear neural network: in the hidden layer, the received signals are whitened, while the network outputs directly provide an estimation of the source symbols. We consider a stochastic approximate learning algorithm for each layer according to the property of the correlation matrices of the transmitted symbols. The proposed class of networks yields good results in simulation examples for the blind equalization of a three-ray multipath channel.
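
The whitening step performed by the hidden layer can be sketched directly: a linear transform computed from the sample covariance decorrelates the received signals and normalizes their variance. This is a minimal numpy illustration, not the paper's stochastic learning rule; the mixing matrix `A` and the symmetric (ZCA-style) whitening choice are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
# Mix independent sources to obtain correlated "received" signals.
A = np.array([[1.0, 0.6], [0.2, 1.0]])
S = rng.normal(size=(2, 5000))
X = A @ S

# Whitening matrix from the eigendecomposition of the sample covariance.
C = np.cov(X)
d, E = np.linalg.eigh(C)
V = E @ np.diag(d ** -0.5) @ E.T   # symmetric (ZCA) whitening
Z = V @ X

# The whitened signals are decorrelated with unit variance.
assert np.allclose(np.cov(Z), np.eye(2), atol=1e-6)
```

In the paper this whitening is learned adaptively, and a second layer then rotates the whitened signals toward the source symbols.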

  • Handwritten digit recognition by adaptive-subspace self-organizing map (ASSOM)

    Page(s): 939 - 945
    PDF (248 KB)

    The adaptive-subspace self-organizing map (ASSOM) proposed by Kohonen is a recent development in self-organizing map (SOM) computation. In this paper, we propose a method to realize ASSOM using a neural learning algorithm in nonlinear autoencoder networks. Our method has the advantage of numerical stability. We have applied our ASSOM model to build a modular classification system for handwritten digit recognition. Ten ASSOM modules are used to capture different features in the ten classes of digits. When a test digit is presented to all the modules, each module provides a reconstructed pattern and the system outputs a class label by comparing the ten reconstruction errors. Our experiments show promising results. For relatively small size modules, the classification accuracy reaches 99.3% on the training set and over 97% on the testing set.
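
The classification-by-reconstruction idea above can be sketched with one linear subspace per class: each "module" is a subspace fitted to its class, and a test pattern takes the label of the module that reconstructs it with the smallest error. This is a hedged toy analog (linear subspaces via SVD, two synthetic classes), not the ASSOM learning algorithm itself; `fit_subspace` and `recon_error` are illustrative names.

```python
import numpy as np

def fit_subspace(X, dim=1):
    # Basis of the best dim-dimensional subspace through the origin (SVD).
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :dim]

def recon_error(x, B):
    # Distance from x to its projection onto the subspace spanned by B.
    return np.linalg.norm(x - B @ (B.T @ x))

rng = np.random.default_rng(2)
# Two "classes", each concentrated along its own direction.
d0, d1 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
X0 = np.outer(d0, rng.normal(size=50)) + 0.05 * rng.normal(size=(3, 50))
X1 = np.outer(d1, rng.normal(size=50)) + 0.05 * rng.normal(size=(3, 50))

modules = [fit_subspace(X0), fit_subspace(X1)]  # one module per class

def classify(x):
    # Label = index of the module with the smallest reconstruction error.
    return int(np.argmin([recon_error(x, B) for B in modules]))

assert classify(np.array([2.0, 0.1, 0.0])) == 0
assert classify(np.array([0.1, 2.0, 0.0])) == 1
```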

  • Function approximation: fast-convergence neural approach based on spectral analysis

    Page(s): 725 - 740
    PDF (320 KB)

    We propose a constructive approach to building single-hidden-layer neural networks for nonlinear function approximation using frequency domain analysis. We introduce a spectrum-based learning procedure that minimizes the difference between the spectrum of the training data and the spectrum of the network's estimates. The network is built up incrementally during training and automatically determines the appropriate number of hidden units. This technique achieves similar or better approximation with faster convergence times than traditional techniques such as backpropagation.

  • Solving graph algorithms with networks of spiking neurons

    Page(s): 953 - 957
    PDF (220 KB)

    Spatio-temporal coding that combines spatial constraints with temporal sequencing is of great interest to brain-like circuit modelers. In this paper we present some new ideas of how these types of circuits can self-organize. We introduce a temporal correlation rule based on the time difference between the firing of neurons. With the aid of this rule we show an analogy between a graph and a network of spiking neurons. The shortest path, clustering based on the nearest neighbor, and the minimal spanning tree algorithms are solved using the proposed approach.
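
The graph analogy above has a compact computational reading: if synaptic delays play the role of edge weights and a neuron fires when its first spike arrives, then first-spike times equal shortest-path distances. The event-driven sketch below (a hedged illustration, equivalent to Dijkstra's algorithm, not the paper's spiking circuit) makes that correspondence concrete; `first_spike_times` and the example graph are assumptions.

```python
import heapq

def first_spike_times(graph, source):
    # Event-driven simulation: a neuron fires at the arrival time of its
    # earliest incoming spike; synaptic delays act as edge weights, so
    # first-spike times equal shortest-path distances from the source.
    times = {source: 0.0}
    queue = [(0.0, source)]
    while queue:
        t, u = heapq.heappop(queue)
        if t > times.get(u, float("inf")):
            continue  # u already fired earlier; ignore the later spike
        for v, delay in graph.get(u, []):
            if t + delay < times.get(v, float("inf")):
                times[v] = t + delay
                heapq.heappush(queue, (t + delay, v))
    return times

graph = {
    "a": [("b", 1.0), ("c", 4.0)],
    "b": [("c", 2.0), ("d", 6.0)],
    "c": [("d", 1.0)],
}
t = first_spike_times(graph, "a")
assert t["c"] == 3.0 and t["d"] == 4.0
```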

  • A neurocomputational model of figure-ground discrimination and target tracking

    Page(s): 860 - 884
    PDF (3120 KB)

    A neurocomputational model is presented for figure-ground discrimination and target tracking. In the model, the elementary motion detectors of the correlation type, the computational modules of saccadic and smooth pursuit eye movement, an oscillatory neural-network motion perception module and a selective attention module are involved. It is shown that through the oscillatory amplitude and frequency encoding, and selective synchronization of phase oscillators, the figure and the ground can be successfully discriminated from each other. The receptive fields developed by hidden units of the networks were surprisingly similar to the actual receptive fields and columnar organization found in the primate visual cortex. It is suggested that equivalent mechanisms may exist in the primate visual cortex to discriminate figure-ground in both temporal and spatial domains.

  • Analog design of a new neural network for optical character recognition

    Page(s): 951 - 953
    PDF (72 KB)

    An electronic circuit is presented for a new type of neural network, which achieves a recognition rate of over 100 kHz. The network is used to classify handwritten numerals, presented as Fourier and wavelet descriptors, and has been shown to train much faster than the popular backpropagation network while maintaining classification accuracy.

  • Discrete-time backpropagation for training synaptic delay-based artificial neural networks

    Page(s): 779 - 789
    PDF (192 KB)

    The aim of the paper is to endow a well-known structure for processing time-dependent information, synaptic delay-based ANNs, with a reliable and easy-to-implement algorithm suitable for training temporal decision processes. In fact, we extend the backpropagation algorithm to discrete-time feedforward networks that include adaptable internal time delays in the synapses. The structure of the network is similar to the one presented by Day and Davenport (1993), that is, in addition to the weights modeling the transmission capabilities of the synaptic connections, we model their length by means of a parameter that indicates the delay a discrete event suffers when going from the origin neuron to the target neuron through a synaptic connection. Like the weights, these delays are also trainable, and a training algorithm can be derived that is almost as simple as the backpropagation algorithm, and which is really an extension of it. We present examples of the application of these networks and algorithm to the prediction of time series and to the recognition of patterns in electrocardiographic signals. In the first case, we employ the temporal reasoning characteristics of these networks for the prediction of future values in a benchmark example of a time series: the one governed by the Mackey-Glass chaotic equation. In the second case, we provide a real life example. The problem consists of identifying different types of beats through two levels of temporal processing, one relating the morphological features which make up the beat in time and another one that relates the positions of beats in time, that is, considers rhythm characteristics of the ECG signal. In order to do this, the network receives the signal sequentially; no windowing, segmentation, or thresholding is applied.

  • Direct adaptive control of wind energy conversion systems using Gaussian networks

    Page(s): 898 - 906
    PDF (188 KB)

    Grid connected wind energy conversion systems (WECS) present interesting control demands, due to the intrinsic nonlinear characteristics of windmills and electric generators. In this paper a direct adaptive control strategy for WECS control is proposed. It is based on the combination of two control actions: a radial basis function network-based adaptive controller, which drives the tracking error to zero with user specified dynamics, and a supervisory controller, based on crude bounds of the system's nonlinearities. The supervisory controller fires when the finite neural-network approximation properties cannot be guaranteed. The form of the supervisor control and the adaptation law for the neural controller are derived from a Lyapunov analysis of stability. The results are applied to a typical turbine/generator pair, showing the feasibility of the proposed solution.

  • Controlling chaos by GA-based reinforcement learning neural network

    Page(s): 846 - 859
    PDF (180 KB)

    Proposes a TD (temporal difference) and GA (genetic algorithm) based reinforcement (TDGAR) neural learning scheme for controlling chaotic dynamical systems based on the technique of small perturbations. The TDGAR learning scheme is a new hybrid GA, which integrates the TD prediction method and the GA to fulfil the reinforcement learning task. Structurally, the TDGAR learning system is composed of two integrated feedforward networks. One neural network acts as a critic network for helping the learning of the other network, the action network, which determines the outputs (actions) of the TDGAR learning system. Using the TD prediction method, the critic network can predict the external reinforcement signal and provide a more informative internal reinforcement signal to the action network. The action network uses the GA to adapt itself according to the internal reinforcement signal. This can usually accelerate the GA learning, since an external reinforcement signal may only be available at a time long after a sequence of actions has occurred in the reinforcement learning problems. By defining a simple external reinforcement signal, the TDGAR learning system can learn to produce a series of small perturbations to convert chaotic oscillations of a chaotic system into desired regular ones with a periodic behavior. The proposed method is an adaptive search for the optimum control technique. Computer simulations on controlling two chaotic systems, i.e., the Henon map and the logistic map, have been conducted to illustrate the performance of the proposed method.

  • An ordering algorithm for pattern presentation in fuzzy ARTMAP that tends to improve generalization performance

    Page(s): 768 - 778
    PDF (216 KB)

    We introduce a procedure, based on the max-min clustering method, that identifies a fixed order of training pattern presentation for fuzzy adaptive resonance theory mapping (ARTMAP). This procedure is referred to as the ordering algorithm, and the combination of this procedure with fuzzy ARTMAP is referred to as ordered fuzzy ARTMAP. Experimental results demonstrate that ordered fuzzy ARTMAP exhibits a generalization performance that is better than the average generalization performance of fuzzy ARTMAP, and in certain cases as good as, or better than the best fuzzy ARTMAP generalization performance. We also calculate the number of operations required by the ordering algorithm and compare it to the number of operations required by the training phase of fuzzy ARTMAP. We show that, under mild assumptions, the number of operations required by the ordering algorithm is a fraction of the number of operations required by fuzzy ARTMAP.
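
A greedy max-min ordering of the kind described above can be sketched in a few lines: after a first pattern, each subsequent pattern is the one whose minimum distance to the patterns already chosen is largest, so the presentation order spreads across the input space early. The choice of the first pattern and the Euclidean metric are assumptions of this sketch, not necessarily the paper's exact procedure.

```python
import numpy as np

def maxmin_order(X, first=0):
    # Greedy max-min ordering: each next pattern maximizes its minimum
    # distance to the patterns already selected.
    order = [first]
    remaining = set(range(len(X))) - {first}
    while remaining:
        nxt = max(remaining,
                  key=lambda i: min(np.linalg.norm(X[i] - X[j])
                                    for j in order))
        order.append(nxt)
        remaining.remove(nxt)
    return order

X = np.array([[0.0], [1.0], [10.0]])
# Starting from pattern 0, the far pattern is presented before the
# nearby one, spreading the training sequence across the input space.
assert maxmin_order(X) == [0, 2, 1]
```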

  • New stability conditions for Hopfield networks in partial simultaneous update mode

    Page(s): 975 - 978
    PDF (108 KB)

    Cernuschi-Frias proposed (IEEE Trans. Syst., Man, Cybern., vol.19, p.887-8, 1989) a partial simultaneous updating (PSU) mode for Hopfield networks. He also derived sufficient conditions to ensure global stability. In this letter, a counter-example is given to illustrate that the PSU sequence may converge to limit cycles even if one uses a connection matrix satisfying the Cernuschi-Frias conditions. Then, new sufficient conditions ensuring global convergence of a Hopfield network in PSU mode are derived. Compared with the fully parallel mode, the new result slightly relaxes the lower bound on the main diagonal elements of the connection matrix.
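
The PSU mode itself is easy to state in code: at each step a chosen subset of neurons is updated simultaneously while the rest keep their states. The sketch below only demonstrates the update mode on a simple Hebbian network; it does not reproduce the letter's counter-example or its new stability conditions, and `psu_step` and the block partition are illustrative choices.

```python
import numpy as np

def psu_step(x, W, group):
    # Partial simultaneous update: the neurons listed in `group` are
    # updated at the same time; all others keep their previous state.
    x = x.copy()
    x[group] = np.where(W[group] @ x >= 0, 1.0, -1.0)
    return x

# Hebbian weights storing one bipolar pattern.
p = np.array([1.0, -1.0, 1.0, -1.0, 1.0, -1.0])
W = np.outer(p, p) / len(p)

x = p.copy()
x[0] = -x[0]                            # start from a corrupted pattern
for group in ([0, 1, 2], [3, 4, 5]):    # update in two blocks
    x = psu_step(x, W, group)

assert np.array_equal(x, p)             # PSU recovered the stored pattern
```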

  • A geometrical representation of McCulloch-Pitts neural model and its applications

    Page(s): 925 - 929
    PDF (120 KB)

    In this paper, a geometrical representation of the McCulloch-Pitts neural model (1943) is presented. From this representation, a clear visual picture and interpretation of the model can be seen. Two interesting applications based on the interpretation are discussed. They are 1) a new design principle of feedforward neural networks and 2) a new proof of mapping abilities of three-layer feedforward neural networks.

  • A complex valued radial basis function network for equalization of fast time varying channels

    Page(s): 958 - 960
    PDF (104 KB)

    This paper presents a complex valued radial basis function (RBF) network for equalization of fast time varying channels. A new method for calculating the centers of the RBF network is given. The method allows fixing the number of RBF centers even as the equalizer order is increased, so that a good performance is obtained by a high-order RBF equalizer with a small number of centers. Simulations are performed on time varying channels using a Rayleigh fading channel model to compare the performance of our RBF with an adaptive maximum-likelihood sequence estimator (MLSE) consisting of a channel estimator and an MLSE implemented by the Viterbi algorithm. The results show that the RBF equalizer produces superior performance with less computational complexity.

  • Design of fuzzy systems using neurofuzzy networks

    Page(s): 815 - 827
    PDF (1096 KB)

    Introduces a systematic approach for fuzzy system design based on a class of neural fuzzy networks built upon a general neuron model. The network structure is such that it encodes the knowledge learned in the form of if-then fuzzy rules and processes data following fuzzy reasoning principles. The technique provides a mechanism to obtain rules covering the whole input/output space as well as the membership functions (including their shapes) for each input variable. Such characteristics are of utmost importance in fuzzy systems design and application. In addition, after learning, it is very simple to extract fuzzy rules in the linguistic form. The network has universal approximation capability, a property very useful in, e.g., modeling and control applications. Here we focus on function approximation problems as a vehicle to illustrate its usefulness and to evaluate its performance. Comparisons with alternative approaches are also included. Both noise-free and noisy data were considered in the computational experiments. The neural fuzzy network developed here and, consequently, the underlying approach, has been shown to provide good results from the accuracy, complexity, and system design points of view.

  • A recurrent self-organizing neural fuzzy inference network

    Page(s): 828 - 845
    PDF (348 KB)

    A recurrent self-organizing neural fuzzy inference network (RSONFIN) is proposed. The RSONFIN is inherently a recurrent multilayered connectionist network for realizing the basic elements and functions of dynamic fuzzy inference, and may be considered to be constructed from a series of dynamic fuzzy rules. The temporal relations embedded in the network are built by adding some feedback connections representing the memory elements to a feedforward neural fuzzy network. Each weight as well as node in the RSONFIN has its own meaning and represents a special element in a fuzzy rule. There are no hidden nodes initially in the RSONFIN. They are created online via concurrent structure identification and parameter identification. The structure learning together with the parameter learning forms a fast learning algorithm for building a small, yet powerful, dynamic neural fuzzy network. Two major characteristics of the RSONFIN can thus be seen: 1) the recurrent property of the RSONFIN makes it suitable for dealing with temporal problems and 2) no predetermination, like the number of hidden nodes, must be given, since the RSONFIN can find its optimal structure and parameters automatically and quickly. Moreover, to reduce the number of fuzzy rules generated, a flexible input partition method, the aligned clustering-based algorithm, is proposed. Various simulations on temporal problems are done and performance comparisons with some existing recurrent networks are also made. Efficiency of the RSONFIN is verified from these results.

  • Circular backpropagation networks embed vector quantization

    Page(s): 972 - 975
    PDF (144 KB)

    This letter proves the equivalence between vector quantization (VQ) classifiers and circular backpropagation (CBP) networks. The calibrated prototypes for a VQ schema can be plugged into a CBP feedforward structure having the same number of hidden neurons and featuring the same mapping. The letter describes how to exploit such equivalence by using VQ prototypes to perform a meaningful initialization for BP optimization. The effectiveness of the approach was tested on a real classification problem (NIST handwritten digits).
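
The VQ side of the equivalence above is just nearest-prototype classification, which the sketch below illustrates; the letter's contribution is that these same prototypes can be transplanted into a CBP network as an initialization for gradient training. The prototype values and `vq_predict` are illustrative assumptions, not data from the letter.

```python
import numpy as np

def vq_predict(x, prototypes, labels):
    # Nearest-prototype (VQ) classification: the label of the closest
    # calibrated prototype is returned.
    d = np.linalg.norm(prototypes - x, axis=1)
    return int(labels[int(np.argmin(d))])

prototypes = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
labels = np.array([0, 1, 1])

assert vq_predict(np.array([0.1, -0.2]), prototypes, labels) == 0
assert vq_predict(np.array([0.9, 0.8]), prototypes, labels) == 1
```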

  • Generalization, discrimination, and multiple categorization using adaptive resonance theory

    Page(s): 757 - 767
    PDF (220 KB)

    The internal competition between categories in the adaptive resonance theory (ART) neural model can be biased by replacing the original choice function by one that contains an attentional tuning parameter under external control. For the same input but different values of the attentional tuning parameter, the network can learn and recall different categories with different degrees of generality, thus permitting the coexistence of both general and specific categorizations of the same set of data. Any number of these categorizations can be learned within one and the same network by virtue of generalization and discrimination properties. A simple model in which the attentional tuning parameter and the vigilance parameter of ART are linked together is described. The self-stabilization property is shown to be preserved for an arbitrary sequence of analog inputs, and for arbitrary orderings of arbitrarily chosen vigilance levels.

  • Learning continuous trajectories in recurrent neural networks with time-dependent weights

    Page(s): 741 - 756
    PDF (348 KB)

    The paper is concerned with a general learning (with arbitrary criterion and state-dependent constraints) of continuous trajectories by means of recurrent neural networks with time-varying weights. The learning process is transformed into an optimal control framework, where the weights to be found are treated as controls. A learning algorithm based on a variational formulation of Pontryagin's maximum principle is proposed. This algorithm is shown to converge, under reasonable conditions, to an optimal solution. The neural networks with time-dependent weights make it possible to efficiently find an admissible solution (i.e., initial weights satisfying state constraints) which then serves as an initial guess to carry out a proper minimization of a given criterion. The proposed methodology may be directly applicable to both classification of temporal sequences and to optimal tracking of nonlinear dynamic systems. Numerical examples are also given which demonstrate the efficiency of the approach presented.


Aims & Scope

IEEE Transactions on Neural Networks is devoted to the science and technology of neural networks, publishing work that discloses significant technical knowledge, exploratory developments, and applications of neural networks spanning biology, software, and hardware.


This Transactions ceased production in 2011. The current retitled publication is IEEE Transactions on Neural Networks and Learning Systems.

Full Aims & Scope