
IEEE Transactions on Neural Networks

Issue 1 • Jan 2001


Displaying Results 1 - 18 of 18
  • Nonlinear kernel-based statistical pattern analysis

    Publication Year: 2001, Page(s): 16 - 32
    Cited by: Papers (71)

    The eigenstructure of the second-order statistics of a multivariate random population can be inferred from the matrix of pairwise combinations of inner products of the samples. Therefore, it can also be obtained efficiently in the implicit, high-dimensional feature spaces defined by kernel functions. We elaborate on this property to obtain general expressions for immediate derivation of nonlinear counterparts of a number of standard pattern analysis algorithms, including principal component analysis, data compression and denoising, and Fisher's discriminant. The connection between kernel methods and nonparametric density estimation is also illustrated. Using these results, we introduce the kernel version of the Mahalanobis distance, which gives rise to nonparametric models with unexpected and interesting properties, and also propose a kernel version of the minimum squared error (MSE) linear discriminant function. This learning machine is particularly simple and includes a number of generalized linear models such as the potential functions method or the radial basis function (RBF) network. Our results shed some light on the relative merit of feature spaces and inductive bias in the remarkable generalization properties of the support vector machine (SVM). Although in most situations the SVM obtains the lowest error rates, exhaustive experiments with synthetic and natural data show that simple kernel machines based on pseudoinversion are competitive in problems with appreciable class overlap.

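The abstract above rests on one fact: the feature-space second-order statistics can be recovered from the matrix of pairwise inner products (the kernel matrix). A minimal NumPy sketch of that idea, in the form of kernel principal component analysis, follows; the Gaussian kernel, its width, and the toy data are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian (RBF) kernel matrix between row-sample matrices X and Y."""
    sq = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * sq)

def kernel_pca(X, n_components=2, gamma=1.0):
    """Eigendecomposition of the centered kernel (Gram) matrix: the
    feature-space second-order statistics are inferred purely from
    pairwise inner products of the samples."""
    n = X.shape[0]
    K = rbf_kernel(X, X, gamma)
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one   # center in feature space
    vals, vecs = np.linalg.eigh(Kc)              # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[idx], vecs[:, idx]
    # Projections of the training samples onto the kernel principal axes.
    return vecs * np.sqrt(np.maximum(vals, 0.0))

# Example: noisy ring data projected onto its two leading kernel components.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 100)
X = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.standard_normal((100, 2))
print(kernel_pca(X, n_components=2, gamma=2.0).shape)  # (100, 2)
```
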
  • A multiexpert framework for character recognition: a novel application of Clifford networks

    Publication Year: 2001, Page(s): 101 - 112
    Cited by: Papers (3)

    A novel multiple-expert framework for recognition of handwritten characters is presented. The proposed framework is composed of multiple classifiers (experts) put together in such a manner as to enhance the recognition capability of the combined network compared to that of the best-performing individual expert participating in the framework. Each of these experts has been derived from a novel neural structure in which the weight values are derived from a Clifford algebra, a mathematical framework capable of capturing the interdimensional dependencies found in multidimensional data. It offers a technique for concise data storage and processing by representing dependencies between the component dimensions of the data that are otherwise difficult to encode, and hence it is often employed in analyzing multidimensional data. Results achieved by the proposed multiple-expert framework demonstrate significant improvement over alternative techniques.

  • A hybrid learning scheme combining EM and MASMOD algorithms for fuzzy local linearization modeling

    Publication Year: 2001, Page(s): 43 - 53
    Cited by: Papers (2)

    Fuzzy local linearization (FLL) is a useful divide-and-conquer method for coping with complex problems such as modeling unknown nonlinear systems from data for state estimation and control. Based on a probabilistic interpretation of FLL, the paper proposes a hybrid learning scheme for FLL modeling, which uses a modified adaptive spline modeling (MASMOD) algorithm to construct the antecedent parts (membership functions) in the FLL model, and an expectation-maximization (EM) algorithm to parameterize the consequent parts (local linear models). The hybrid method not only has an approximation ability as good as most neuro-fuzzy network models, but also produces a parsimonious network structure (gained from MASMOD) and provides covariance information about the model error (gained from EM), which is valuable in applications such as state estimation and control. Numerical examples on nonlinear time-series analysis and nonlinear trajectory estimation using FLL models are presented to validate the derived algorithm.

  • Fuzzy neural network with general parameter adaptation for modeling of nonlinear time-series

    Publication Year: 2001, Page(s): 148 - 152
    Cited by: Papers (13)

    By taking advantage of both fuzzy systems and neural networks, a fuzzy-neural network with a general parameter (GP) learning algorithm and heuristic model structure determination is proposed in this paper. Our network model is based on the Gaussian radial basis function network (RBFN). We use the flexible GP approach both for initializing the off-line training algorithm and for efficiently fine-tuning the nonlinear model in online operation. A modification of the robust unbiasedness criterion using distorter (UCD) is utilized for selecting the structural parameters of this adaptive model. The UCD approach provides the desired modeling accuracy and avoids the risk of over-fitting. To illustrate the operation of the proposed modeling scheme, it is applied experimentally to a fault detection problem.

  • Adaptive structures with algebraic loops

    Publication Year: 2001, Page(s): 33 - 42
    Cited by: Papers (2)

    The contraction theorem has many fields of application, including linear algebraic equations, differential and integral equations, control systems theory, optimization, etc. The paper aims at showing how contraction mapping can be applied to the computation and the training of adaptive structures with algebraic loops. These structures are used for the approximation of unknown functional relations (mappings) represented by training sets. The technique is extended to multilayer neural networks with algebraic loops. Application of a two-layer neural network to breast cancer diagnosis is described.

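The contraction-mapping computation described above can be illustrated with a small sketch: an algebraic loop y = f(Wy + x) solved by fixed-point iteration, which converges whenever the loop map is contractive. The tanh nonlinearity, the spectral-norm scaling, and the tolerance are assumptions made for illustration, not the structures studied in the paper.

```python
import numpy as np

def solve_algebraic_loop(W, x, max_iter=500, tol=1e-10):
    """Fixed-point iteration y <- tanh(W @ y + x). If the loop map is a
    contraction (e.g., spectral norm of W below 1, since tanh has slope
    at most 1), the iteration converges to the unique loop solution."""
    y = np.zeros_like(x)
    for _ in range(max_iter):
        y_new = np.tanh(W @ y + x)
        if np.linalg.norm(y_new - y) < tol:
            return y_new
        y = y_new
    return y

rng = np.random.default_rng(1)
W = rng.standard_normal((5, 5))
W *= 0.5 / np.linalg.norm(W, 2)   # enforce spectral norm 0.5 -> contraction
x = rng.standard_normal(5)
y = solve_algebraic_loop(W, x)
print(np.max(np.abs(y - np.tanh(W @ y + x))))  # residual close to zero
```
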
  • A new model of self-organizing neural networks and its application in data projection

    Publication Year: 2001, Page(s): 153 - 158
    Cited by: Papers (26)

    In this paper a new model of self-organizing neural networks is proposed. An algorithm called the “double self-organizing feature map” (DSOM) algorithm is developed to train the novel model. With the DSOM algorithm, the network adaptively adjusts its structure during the learning phase so that neurons responding to similar stimuli acquire similar weight vectors and, at the same time, move spatially closer to each other. The final network structure allows us to visualize high-dimensional data as a two-dimensional scatter plot. The resulting representations allow a straightforward analysis of the inherent structure of clusters within the input data. One high-dimensional data set is used to test the effectiveness of the proposed neural networks.

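A rough sketch of the "double" self-organization idea, in which every neuron carries both an input-space weight vector and an adaptive two-dimensional position, is given below. The particular update rules, learning rates, and neighborhood function are guesses for illustration and are not the authors' exact DSOM algorithm.

```python
import numpy as np

def train_dsom(X, n_neurons=30, epochs=50, lr_w=0.3, lr_p=0.1, sigma=0.5, seed=0):
    """Each neuron keeps a weight vector in input space and a 2-D position;
    both are pulled toward the winner's neighborhood, so neurons with
    similar weights also end up spatially close."""
    rng = np.random.default_rng(seed)
    W = X[rng.choice(len(X), n_neurons, replace=False)].copy()  # input-space weights
    P = rng.uniform(0, 1, (n_neurons, 2))                       # 2-D positions
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:
            winner = np.argmin(np.linalg.norm(W - x, axis=1))
            d = np.linalg.norm(P - P[winner], axis=1)           # output-space distance
            h = np.exp(-(d ** 2) / (2 * sigma ** 2))[:, None]   # neighborhood strength
            W += lr_w * h * (x - W)          # move weights toward the sample
            P += lr_p * h * (P[winner] - P)  # move positions toward the winner
    return W, P

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(m, 0.2, (100, 5)) for m in (0.0, 1.0, 2.0)])
W, P = train_dsom(X)
print(P.shape)  # (30, 2): coordinates for a 2-D scatter plot of the data structure
```
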
  • Neural network-based adaptive controller design of robotic manipulators with an observer

    Publication Year: 2001, Page(s): 54 - 67
    Cited by: Papers (38)

    A neural network (NN)-based adaptive controller with an observer is proposed for the trajectory tracking of robotic manipulators with unknown dynamic nonlinearities. It is assumed that the robotic manipulator has only joint angle position measurements. A linear observer is used to estimate the robot joint angle velocity, while NNs are employed to further improve the control performance of the controlled system by approximating the modified robot dynamics function. The adaptive controller with an observer can guarantee uniform ultimate boundedness of the tracking errors and the observer errors, as well as boundedness of the NN weights. For performance comparison, a conventional adaptive algorithm with an observer, using linearity in the parameters of the robot dynamics, is also developed in the same control framework as the NN approach for online approximation of unknown nonlinearities of the robot dynamics. The main theoretical results for designing such an observer-based adaptive controller, both with the NN approach using multilayer NNs with sigmoidal activation functions and with the conventional adaptive approach using linearity in parameters of the robot dynamics, are given. Performance comparisons between the NN approach and the conventional adaptation approach with an observer are carried out to show the advantages of the proposed control approaches through simulation studies.

  • Comparison of two different PNN training approaches for satellite cloud data classification

    Publication Year: 2001, Page(s): 164 - 168
    Cited by: Papers (7)

    Presents a training algorithm for probabilistic neural networks (PNN) using the minimum classification error (MCE) criterion. A comparison is made between the MCE training scheme and the widely used maximum likelihood (ML) learning on a cloud classification problem using satellite imagery data.

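For context, the conventional probabilistic neural network is essentially a Gaussian Parzen-window classifier, sketched below in its plain maximum-likelihood-style form (the training patterns are simply stored). The MCE training discussed in the paper would instead tune the network parameters to minimize a smoothed count of classification errors; the kernel width and toy data here are illustrative assumptions.

```python
import numpy as np

class PNN:
    """Classic probabilistic neural network: one Gaussian Parzen kernel per
    stored training pattern; the class score is the mean kernel response."""
    def __init__(self, sigma=1.0):
        self.sigma = sigma

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.X_, self.y_ = np.asarray(X, float), np.asarray(y)
        return self

    def predict(self, X):
        X = np.asarray(X, float)
        scores = []
        for c in self.classes_:
            Xc = self.X_[self.y_ == c]
            d2 = ((X[:, None, :] - Xc[None, :, :]) ** 2).sum(-1)
            scores.append(np.exp(-d2 / (2 * self.sigma ** 2)).mean(axis=1))
        return self.classes_[np.argmax(np.stack(scores, axis=1), axis=1)]

# Toy two-class example.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
print((PNN(sigma=1.0).fit(X, y).predict(X) == y).mean())  # close to 1.0
```
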
  • Nonlinear blind source separation using a radial basis function network

    Publication Year: 2001, Page(s): 124 - 134
    Cited by: Papers (48)

    This paper proposes a novel neural-network approach to blind source separation in nonlinear mixtures. The approach utilizes a radial basis function (RBF) neural network to approximate the inverse of the nonlinear mixing mapping, which is assumed to exist and to be approximable by an RBF network. A contrast function, which consists of the mutual information and partial moments of the outputs of the separation system, is defined to separate the nonlinear mixture. Minimization of the contrast function results in independent outputs with the desired moments, such that the original sources are separated properly. Two learning algorithms for the parametric RBF network are developed by using the stochastic gradient descent method and an unsupervised clustering method. By virtue of the RBF neural network, the proposed approach takes advantage of the high convergence rate of learning the weights in the hidden and output layers, natural unsupervised learning characteristics, a modular structure, and universal approximation capability. Simulation results are presented to demonstrate the feasibility, robustness, and computability of the proposed method.

  • New recursive-least-squares algorithms for nonlinear active control of sound and vibration using neural networks

    Publication Year: 2001, Page(s): 135 - 147
    Cited by: Papers (14) | Patents (1)

    In recent years, a few articles describing the use of neural networks for nonlinear active control of sound and vibration have been published. Using a control structure with two multilayer feedforward neural networks (one as a nonlinear controller and one as a nonlinear plant model), steepest-descent algorithms based on two distinct gradient approaches were introduced for the training of the controller network. The two gradient approaches are sometimes called the filtered-x approach and the adjoint approach. Some recursive-least-squares algorithms were also introduced, using the adjoint approach. In this paper, a heuristic procedure is introduced for the development of recursive-least-squares algorithms based on both the filtered-x and the adjoint gradient approaches. This leads to new recursive-least-squares algorithms for the training of the controller neural network in the two-network structure. These new algorithms produce better convergence performance than previously published algorithms. Differences in the performance of algorithms using the filtered-x and the adjoint gradient approaches are discussed in the paper. The computational load of the algorithms is evaluated for multichannel systems of nonlinear active control. Simulation results are presented to compare the convergence performance of the algorithms, showing the convergence gain provided by the new algorithms.

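The common building block of recursive-least-squares adaptation is the standard RLS update with exponential forgetting, sketched below for a generic linear-in-parameters model. The FIR identification example and all parameter values are illustrative assumptions; the paper's contribution, applying such updates to the filtered-x and adjoint gradients of a neural controller in multichannel control, is not reproduced here.

```python
import numpy as np

class RLS:
    """Recursive least squares for y ~ w.x with forgetting factor lam."""
    def __init__(self, n, lam=0.99, delta=100.0):
        self.w = np.zeros(n)
        self.P = delta * np.eye(n)   # inverse input-correlation estimate
        self.lam = lam

    def update(self, x, d):
        x = np.asarray(x, float)
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)   # gain vector
        e = d - self.w @ x             # a priori error
        self.w += k * e
        self.P = (self.P - np.outer(k, Px)) / self.lam
        return e

# Identify a 4-tap FIR system from noisy input/output data.
rng = np.random.default_rng(4)
h = np.array([0.5, -0.3, 0.2, 0.1])
rls, buf = RLS(4), np.zeros(4)
for _ in range(2000):
    buf = np.r_[rng.standard_normal(), buf[:-1]]
    rls.update(buf, h @ buf + 0.01 * rng.standard_normal())
print(np.round(rls.w, 3))  # approximately h
```
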
  • Stability of asymmetric Hopfield networks

    Publication Year: 2001, Page(s): 159 - 163
    Cited by: Papers (59)

    In this paper, we discuss in detail the dynamical behaviors of recurrently asymmetrically connected neural networks. We propose an effective approach to studying the global and local stability of these networks. Many well-known existing results are unified in our framework, which gives much better test conditions for global and local stability. Sufficient conditions for the uniqueness of the equilibrium point and for its stability are also given.

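A small simulation sketch of asymmetric Hopfield-type dynamics is given below. It uses a simple small-gain condition (spectral norm of the weight matrix below one) as one sufficient condition for a unique, globally attracting equilibrium; this condition is only illustrative and is far more conservative than the test conditions derived in the paper.

```python
import numpy as np

def simulate_hopfield(W, b, x0, dt=0.01, steps=20000):
    """Euler simulation of dx/dt = -x + W tanh(x) + b with a possibly
    asymmetric weight matrix W."""
    x = np.asarray(x0, float).copy()
    b = np.asarray(b, float)
    for _ in range(steps):
        x = x + dt * (-x + W @ np.tanh(x) + b)
    return x

rng = np.random.default_rng(5)
W = rng.standard_normal((6, 6))
W *= 0.8 / np.linalg.norm(W, 2)   # small gain: one sufficient condition for global stability
b = rng.standard_normal(6)
x_a = simulate_hopfield(W, b, rng.standard_normal(6))
x_b = simulate_hopfield(W, b, rng.standard_normal(6))
print(np.allclose(x_a, x_b, atol=1e-4))  # different initial states, same equilibrium
```
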
  • Experiments on the application of IOHMMs to model financial returns series

    Publication Year: 2001, Page(s): 113 - 123
    Cited by: Papers (8)

    Input-output hidden Markov models (IOHMM) are conditional hidden Markov models in which the emission (and possibly the transition) probabilities can be conditioned on an input sequence. For example, these conditional distributions can be linear, logistic, or nonlinear (using, for example, multilayer neural networks). We compare the generalization performance of several models which are special cases of input-output hidden Markov models on financial time-series prediction tasks: an unconditional Gaussian, a conditional linear Gaussian, a mixture of Gaussians, a mixture of conditional linear Gaussians, a hidden Markov model, and various IOHMMs. The experiments compare these models on predicting the conditional density of returns of market and sector indices. Note that the unconditional Gaussian estimates the first moment with the historical average. The results show that, although the historical average gives the best results for the first moment, for the higher moments the IOHMMs yielded significantly better performance, as estimated by the out-of-sample likelihood.

  • Heuristic pattern correction scheme using adaptively trained generalized regression neural networks

    Publication Year: 2001, Page(s): 91 - 100
    Cited by: Papers (13) | Patents (1)

    In many pattern classification problems, an intelligent neural system is required that can learn newly encountered but misclassified patterns incrementally, while maintaining good classification performance over the past patterns stored in the network. In this paper, a heuristic pattern correction scheme is proposed using adaptively trained generalized regression neural networks (GRNNs). The scheme is based upon both a network growing mechanism and a dual-stage shrinking mechanism. In the network growing phase, a subset of the misclassified patterns in each incoming data set is iteratively added into the network until all the patterns in the incoming data set are classified correctly. Then, the redundancy introduced in the growing phase is removed in the dual-stage network shrinking. Both long- and short-term memory models, motivated by biological studies of the brain, are considered in the network shrinking. The learning capability of the proposed scheme is investigated through extensive simulation studies.

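At its core, the generalized regression neural network used in this scheme is a Gaussian-kernel weighted average of stored target values (the Nadaraya-Watson form), sketched below. The paper's growing and dual-stage shrinking mechanisms are not reproduced; the data and the kernel width are illustrative assumptions.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.3):
    """GRNN prediction: Gaussian-kernel weighted average of stored targets."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return (w @ y_train) / np.maximum(w.sum(axis=1), 1e-12)

# Fit a noisy sine curve from stored patterns.
rng = np.random.default_rng(6)
X = rng.uniform(0, 2 * np.pi, (200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
Xq = np.linspace(0, 2 * np.pi, 5)[:, None]
print(np.round(grnn_predict(X, y, Xq), 2))  # roughly sin at the query points
```
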
  • Approximation of nonlinear systems with radial basis function neural networks

    Publication Year: 2001, Page(s): 1 - 15
    Cited by: Papers (54)

    A technique for approximating a continuous function of n variables with a radial basis function (RBF) neural network is presented. The method uses an n-dimensional raised-cosine type of RBF that is smooth, yet has compact support. The RBF network coefficients are low-order polynomial functions of the input. A simple computational procedure is presented which significantly reduces the network training and evaluation time. Storage space is also reduced by allowing for a nonuniform grid of points about which the RBFs are centered. The network output is shown to be continuous and to have a continuous first derivative. When the network is used to approximate a nonlinear dynamic system, the resulting system is bounded-input bounded-output stable. For the special case of a linear system, the RBF network representation is exact on the domain over which it is defined, and it is optimal in terms of the number of distinct storage parameters required. Several examples are presented which illustrate the effectiveness of this technique.

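One way to realize a smooth, compactly supported raised-cosine basis function of the kind described above is sketched below in one dimension, together with a least-squares fit of sin(x). The exact form of the basis function, the uniform grid, and the fitting procedure are illustrative assumptions rather than the paper's construction.

```python
import numpy as np

def raised_cosine_rbf(x, center, width):
    """A raised-cosine basis function: smooth, and exactly zero for
    |x - center| >= width (compact support)."""
    u = np.clip(np.abs(x - center) / width, 0.0, 1.0)
    return 0.5 * (1.0 + np.cos(np.pi * u)) * (u < 1.0)

# Approximate sin(x) on [0, 2*pi] with a uniform grid of centers.
centers = np.linspace(0, 2 * np.pi, 13)
width = centers[1] - centers[0]
xs = np.linspace(0, 2 * np.pi, 400)
Phi = np.stack([raised_cosine_rbf(xs, c, width) for c in centers], axis=1)
w, *_ = np.linalg.lstsq(Phi, np.sin(xs), rcond=None)
print(float(np.max(np.abs(Phi @ w - np.sin(xs)))))  # small approximation error
```
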
  • Hybrid supervisory control using recurrent fuzzy neural network for tracking periodic inputs

    Publication Year: 2001, Page(s): 68 - 90
    Cited by: Papers (34)

    A hybrid supervisory control system using a recurrent fuzzy neural network (RFNN) is proposed to control the mover of a permanent magnet linear synchronous motor (PMLSM) servo drive for the tracking of periodic reference inputs. First, the field-oriented mechanism is applied to formulate the dynamic equation of the PMLSM. Then, a hybrid supervisory control system, which combines a supervisory control system and an intelligent control system, is proposed to control the mover of the PMLSM for periodic motion. The supervisory control law is designed based on the uncertainty bounds of the controlled system to stabilize the system states around a predefined bound region. Since the supervisory control law induces excessive and chattering control effort, the intelligent control system is introduced to smooth and reduce the control effort when the system states are inside the predefined bound region. In the intelligent control system, the RFNN control is the main tracking controller, which is used to mimic an ideal control law, and a compensated control is proposed to compensate for the difference between the ideal control law and the RFNN control. The RFNN has the merits of fuzzy inference, dynamic mapping, and fast convergence speed. In addition, an online parameter training methodology, derived using the Lyapunov stability theorem and the gradient descent method, is proposed to increase the learning capability of the RFNN. The proposed hybrid supervisory control system using the RFNN can track various periodic reference inputs effectively with robust control performance.

  • On the use of separable Volterra networks to model discrete-time Volterra systems

    Publication Year: 2001, Page(s): 174 - 175
    Cited by: Papers (2)

    A paper by Marmarelis and Zhao (1997) describes the use of what the authors call a “separable Volterra network” for modeling high-order Volterra systems. This model is identical to a parallel cascade of dynamic linear/polynomial static nonlinear elements, which has been extensively studied since 1982 for the same purpose.

  • Nonlinear magnetic storage channel equalization using minimal resource allocation network (MRAN)

    Publication Year: 2001, Page(s): 171 - 174
    Cited by: Papers (4) | Patents (2)

    This letter presents the application of the recently developed minimal radial basis function neural network, called the minimal resource allocation network (MRAN), for equalization in highly nonlinear magnetic data storage channels. Using a realistic magnetic channel model, the MRAN equalizer's performance is compared with that of the nonlinear neural equalizer of Nair and Moon (1997), referred to as the maximum signal-to-distortion ratio (MSDR) equalizer. The MSDR equalizer uses a specially designed neural architecture in which all the parameters are determined theoretically. Simulation results indicate that the MRAN equalizer outperforms the MSDR equalizer in terms of higher signal-to-distortion ratios.

  • An optimization-based design procedure for asymmetric bidirectional associative memories

    Publication Year: 2001, Page(s): 169 - 170
    Cited by: Papers (2)

    In this letter, we consider the problem of designing asymmetric bidirectional associative memories (ABAM). Based on a newly derived theorem for the ABAM model, we propose an optimization-based design procedure for obtaining an ABAM that can store given bipolar vector pairs with certain error correction properties. Our design procedure consists of generalized eigenvalue problems, which can be efficiently solved by recently developed interior point methods. The validity of the proposed method is illustrated by a design example.


Aims & Scope

IEEE Transactions on Neural Networks is devoted to the science and technology of neural networks, publishing articles that disclose significant technical knowledge, exploratory developments, and applications of neural networks, from biology to software to hardware.

 

This Transactions ceased production in 2011. The current retitled publication is IEEE Transactions on Neural Networks and Learning Systems.
