
IEEE Transactions on Neural Networks

Issue 1 • Jan. 2010


  • Table of contents

    Page(s): C1
    PDF (36 KB)
    Freely Available from IEEE
  • IEEE Transactions on Neural Networks publication information

    Page(s): C2
    PDF (38 KB)
    Freely Available from IEEE
  • Editorial: The IEEE Transactions on Neural Networks 2010 and Beyond

    Page(s): 1 - 10
    PDF (4159 KB)
    Freely Available from IEEE
  • Global Synchronization for Discrete-Time Stochastic Complex Networks With Randomly Occurred Nonlinearities and Mixed Time Delays

    Page(s): 11 - 25
    PDF (902 KB) | HTML

    In this paper, the problem of stochastic synchronization analysis is investigated for a new array of coupled discrete-time stochastic complex networks with randomly occurred nonlinearities (RONs) and time delays. The discrete-time complex networks under consideration are subject to: (1) stochastic nonlinearities that occur according to Bernoulli-distributed white noise sequences; (2) stochastic disturbances that enter the coupling term, the delayed coupling term, and the overall network; and (3) time delays that include both discrete and distributed ones. The newly introduced RONs and the multiple stochastic disturbances better reflect the dynamical behavior of coupled complex networks whose information transmission is affected by a noisy environment (e.g., Internet-based control systems). By constructing a novel Lyapunov-like matrix functional, the idea of delay fractioning is applied to the addressed synchronization analysis problem. By combining linear matrix inequality (LMI) techniques, the free-weighting matrix method, and stochastic analysis theories, several delay-dependent sufficient conditions are obtained that ensure asymptotic synchronization in the mean-square sense for the discrete-time stochastic complex networks with time delays. The derived criteria are expressed as LMIs that can be solved with standard numerical software. A simulation example is presented to show the effectiveness and applicability of the proposed results.

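The delay-dependent criteria in this abstract are LMI conditions certified by a Lyapunov-type functional. As a generic, much-simplified illustration (not the paper's actual criteria), the discrete-time Lyapunov inequality A^T P A − P < 0 can be verified numerically for a hypothetical system matrix A:

```python
import numpy as np

# Hypothetical stable discrete-time system x[k+1] = A x[k]
A = np.array([[0.5, 0.1],
              [0.0, 0.8]])
Q = np.eye(2)

# P = sum_k (A^T)^k Q A^k solves the Lyapunov equation A^T P A - P = -Q;
# a positive-definite P certifies asymptotic stability of the origin.
P = np.zeros((2, 2))
Ak = np.eye(2)
for _ in range(200):
    P += Ak.T @ Q @ Ak
    Ak = A @ Ak

print(np.all(np.linalg.eigvalsh(P) > 0))  # True: the origin is stable
```

The series converges because the spectral radius of A is below one; for large or structured problems an LMI solver would be used instead.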
  • An Effective Method of Pruning Support Vector Machine Classifiers

    Page(s): 26 - 38
    PDF (1856 KB) | HTML

    Support vector machine (SVM) classifiers often contain many SVs, which lead to high computational cost at runtime and potential overfitting. In this paper, a practical and effective method of pruning SVM classifiers is systematically developed. The kernel row vectors, in one-to-one correspondence with the SVs, are first organized into clusters, and the pruning work is divided into two phases. In the first phase, orthogonal projections (OPs) are performed to find kernel row vectors that can be approximated by the others. In the second phase, the previously found vectors are removed, and crosswise propagations, which simply utilize the coefficients of the OPs, are implemented within each cluster. The method circumvents the problem of explicitly discerning SVs in the high-dimensional feature space, as the SVM formulation does, and does not involve local minima. With different parameters, 3000 experiments were run on the LibSVM software platform. After pruning 42% of the SVs, the average change in classification accuracy was only −0.7%, and the average computation time for removing one SV was 0.006 of the training time. In some scenarios, over 90% of the SVs were pruned with less than 0.1% reduction in classification accuracy. The experiments demonstrate the existence of large numbers of superabundant SVs in trained SVMs and suggest a synergistic use of training and pruning in practice. Many SVMs already used in applications could be upgraded by pruning nearly half of their SVs.

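The first pruning phase described above looks for kernel row vectors that the remaining rows can approximate. A rough sketch of that idea via least-squares orthogonal projection (illustrative only; the K matrix and threshold below are made up, and the paper's clustered two-phase procedure is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical kernel row matrix: 4 random rows plus one that is almost
# a linear combination of the first two (a pruning candidate).
K = rng.standard_normal((4, 8))
K = np.vstack([K, 2.0 * K[0] - K[1] + 1e-6 * rng.standard_normal(8)])

def projection_residual(rows, i):
    """Norm of row i's residual after projecting onto the other rows."""
    others = np.delete(rows, i, axis=0)
    coef, *_ = np.linalg.lstsq(others.T, rows[i], rcond=None)
    return np.linalg.norm(rows[i] - others.T @ coef)

residuals = [projection_residual(K, i) for i in range(len(K))]
prunable = [i for i, r in enumerate(residuals) if r < 1e-3]
print(prunable)  # every row in the linearly dependent set is flagged
```

Note that all members of a linearly dependent set show small residuals, which is why the paper removes vectors and propagates coefficients iteratively rather than all at once.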
  • Global Asymptotic Stability of Reaction–Diffusion Cohen–Grossberg Neural Networks With Continuously Distributed Delays

    Page(s): 39 - 49
    PDF (525 KB) | HTML

    This paper is concerned with the global asymptotic stability of a class of reaction-diffusion Cohen-Grossberg neural networks with continuously distributed delays. Under some suitable assumptions and using a matrix decomposition method, we apply the linear matrix inequality (LMI) method to propose some new sufficient stability conditions for the reaction-diffusion Cohen-Grossberg neural networks with continuously distributed delays. The obtained results are easy to check and improve upon the existing stability results. Some remarks are given to show the advantages of the obtained results over the previous results. An example is also given to demonstrate the effectiveness of the obtained results.

  • Output Feedback Control of a Quadrotor UAV Using Neural Networks

    Page(s): 50 - 66
    PDF (763 KB) | HTML

    In this paper, a new nonlinear controller for a quadrotor unmanned aerial vehicle (UAV) is proposed using neural networks (NNs) and output feedback. The assumption that the UAV dynamics are available is not always practical, especially in an outdoor environment. Therefore, in this work, an NN is introduced to learn the complete dynamics of the UAV online, including uncertain nonlinear terms such as aerodynamic friction and blade flapping. Although a quadrotor UAV is underactuated, a novel NN virtual control input scheme is proposed that allows all six degrees of freedom (DOF) of the UAV to be controlled using only four control inputs. Furthermore, an NN observer is introduced to estimate the translational and angular velocities of the UAV, and an output feedback control law is developed in which only the position and attitude of the UAV are considered measurable. It is shown using Lyapunov theory that the position, orientation, and velocity tracking errors, the virtual control and observer estimation errors, and the NN weight estimation errors for each NN are all semiglobally uniformly ultimately bounded (SGUUB) in the presence of bounded disturbances and NN functional reconstruction errors, while simultaneously relaxing the separation principle. The effectiveness of the proposed output feedback control scheme is demonstrated in the presence of unknown nonlinear dynamics and disturbances, and simulation results are included to support the theoretical development.

  • Impulsive Control and Synchronization for Delayed Neural Networks With Reaction–Diffusion Terms

    Page(s): 67 - 81
    PDF (1310 KB) | HTML

    This paper discusses the global exponential stability and synchronization of delayed reaction-diffusion neural networks with Dirichlet boundary conditions under impulsive control in terms of the p-norm, and points out that there is no constant equilibrium point other than the origin for reaction-diffusion neural networks with Dirichlet boundary conditions. Some new and useful conditions dependent on the diffusion coefficients are obtained to guarantee the global exponential stability and synchronization of the addressed neural networks under the assumed impulsive controllers. Finally, some numerical examples are given to demonstrate the effectiveness of the proposed control methods.

  • Blind Separation of Mutually Correlated Sources Using Precoders

    Page(s): 82 - 90
    PDF (393 KB) | HTML

    This paper studies the problem of blind source separation (BSS) from instantaneous mixtures under the assumption that the source signals are mutually correlated. We propose a novel approach to BSS that uses precoders in the transmitters. We show that if the precoders are properly designed, some cross-correlation coefficients of the coded signals can be forced to zero at certain time lags. The unique correlation properties of the coded signals can then be exploited at the receiver to achieve source separation. Based on the proposed precoders, a subspace-based algorithm is derived for the blind separation of mutually correlated sources. The effectiveness of the algorithm is illustrated by simulation examples.

  • Novel Weighting-Delay-Based Stability Criteria for Recurrent Neural Networks With Time-Varying Delay

    Page(s): 91 - 106
    PDF (590 KB) | HTML

    In this paper, a weighting-delay-based method is developed for the study of the stability problem of a class of recurrent neural networks (RNNs) with time-varying delay. Different from previous results, the delay interval [0, d(t)] is divided into some variable subintervals by employing weighting delays. Thus, new delay-dependent stability criteria for RNNs with time-varying delay are derived by applying this weighting-delay method, which are less conservative than previous results. The proposed stability criteria depend on the positions of weighting delays in the interval [0, d(t)], which can be denoted by the weighting-delay parameters. Different weighting-delay parameters lead to different stability margins for a given system. Thus, a solution based on optimization methods is further given to calculate the optimal weighting-delay parameters. Several examples are provided to verify the effectiveness of the proposed criteria.

  • A Dirichlet Process Mixture of Generalized Dirichlet Distributions for Proportional Data Modeling

    Page(s): 107 - 122
    PDF (1720 KB) | HTML

    In this paper, we propose a clustering algorithm based on both Dirichlet processes and the generalized Dirichlet distribution, which has been shown to be very flexible for proportional data modeling. Our approach can be viewed as an extension of the finite generalized Dirichlet mixture model to the infinite case, based on nonparametric Bayesian analysis. This clustering algorithm does not require the number of mixture components to be specified in advance and estimates it in a principled manner. Our approach is Bayesian and relies on estimating the posterior distribution of clusterings using a Gibbs sampler. Through applications involving real-data classification and image database categorization using visual words, we show that clustering via infinite mixture models offers more powerful and robust performance than classic finite mixtures.

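A Dirichlet process mixture sidesteps fixing the number of components because the DP places mass on countably many clusters. The standard truncated stick-breaking construction of the DP weights (a generic sketch, not the paper's generalized Dirichlet mixture or its Gibbs sampler) looks like:

```python
import numpy as np

rng = np.random.default_rng(42)

def stick_breaking(alpha, truncation):
    """Truncated stick-breaking weights of a Dirichlet process DP(alpha)."""
    betas = rng.beta(1.0, alpha, size=truncation)
    # Each beta takes a fraction of the stick that remains after the
    # previous breaks; the remaining lengths are a cumulative product.
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas[:-1])])
    return betas * remaining

weights = stick_breaking(alpha=2.0, truncation=50)
print(weights.sum())  # close to 1; only the truncated tail mass is missing
```

Larger concentration alpha spreads mass over more clusters; in a mixture model each weight would pair with a component drawn from the base distribution.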
  • Relevance Units Latent Variable Model and Nonlinear Dimensionality Reduction

    Page(s): 123 - 135
    PDF (1354 KB) | HTML

    A new dimensionality reduction method, called the relevance units latent variable model (RULVM), is proposed in this paper. RULVM is closely linked to the framework of the Gaussian process latent variable model (GPLVM) and originates from a recently developed sparse kernel model called the relevance units machine (RUM). RUM follows the idea of the relevance vector machine (RVM) under the Bayesian framework but drops the constraint that relevance vectors (RVs) must be selected from the input vectors; instead, it treats relevance units (RUs) as part of the parameters to be learned from the data. As a result, RUM retains all the advantages of RVM and offers superior sparsity. RULVM inherits the sparseness of RUM, and experimental results show that the RULVM algorithm possesses considerable computational advantages over the GPLVM algorithm.

  • eFSM—A Novel Online Neural-Fuzzy Semantic Memory Model

    Page(s): 136 - 157
    PDF (1984 KB) | HTML

    Fuzzy rule-based systems (FRBSs) have been successfully applied to many areas. However, traditional fuzzy systems are often manually crafted, and their rule bases, which represent the acquired knowledge, are static and cannot be trained to improve the modeling performance. This has led to intensive research on the autonomous construction and tuning of a fuzzy system directly from the observed training data to address the knowledge acquisition bottleneck, resulting in well-established hybrids such as neural-fuzzy systems (NFSs) and genetic fuzzy systems (GFSs). However, the complex and dynamic nature of real-world problems demands that fuzzy rule-based systems and models be able to adapt their parameters and ultimately evolve their rule bases to address the nonstationary (time-varying) characteristics of their operating environments. Recently, considerable research effort has been directed to the study of evolving Takagi-Sugeno (T-S)-type NFSs based on the concept of incremental learning, whereas very few incremental-learning Mamdani-type NFSs have been reported in the literature. Hence, this paper presents the evolving neural-fuzzy semantic memory (eFSM) model, a neural-fuzzy Mamdani architecture with a data-driven, progressively adaptive structure (i.e., rule base) based on incremental learning. Issues related to the incremental learning of the eFSM rule base are carefully investigated, and a novel parameter learning approach is proposed for tuning the fuzzy set parameters in eFSM. The proposed eFSM model elicits highly interpretable semantic knowledge in the form of Mamdani-type if-then fuzzy rules from low-level numeric training data. These Mamdani fuzzy rules define the computing structure of eFSM and are incrementally learned as each training sample arrives. New rules are constructed from the emergence of novel training data, and obsolete fuzzy rules that no longer describe the recently observed data trends are pruned. This enables eFSM to maintain a current and compact set of Mamdani-type if-then fuzzy rules that collectively generalizes and describes the salient associative mappings between the inputs and outputs of the underlying process being modeled. The learning and modeling performances of the proposed eFSM are evaluated using several benchmark applications, and the results are encouraging.

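For readers unfamiliar with the Mamdani style of inference that eFSM evolves, here is a deliberately tiny sketch using two hypothetical rules and weighted-average defuzzification over singleton consequents; eFSM's actual rule learning and inference are considerably richer:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Two hypothetical Mamdani-style rules on a scalar input x:
#   IF x is LOW  THEN y is SMALL   (consequent center 0.0)
#   IF x is HIGH THEN y is LARGE   (consequent center 1.0)
x = 0.7
strengths = np.array([tri(x, -1.0, 0.0, 1.0),   # membership in LOW
                      tri(x, 0.0, 1.0, 2.0)])   # membership in HIGH
centers = np.array([0.0, 1.0])

# Weighted-average defuzzification over the rule consequents
y = float(strengths @ centers / strengths.sum())
print(round(y, 2))  # 0.7
```

The rule antecedents and consequents here are fixed by hand; what eFSM adds is growing, tuning, and pruning such rules incrementally from streaming data.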
  • OP-ELM: Optimally Pruned Extreme Learning Machine

    Page(s): 158 - 162
    PDF (259 KB) | HTML

    In this brief, the optimally pruned extreme learning machine (OP-ELM) methodology is presented. It is based on the original extreme learning machine (ELM) algorithm with additional steps to make it more robust and generic. The whole methodology is presented in detail and then applied to several regression and classification problems. Results for both computational time and accuracy (mean square error) are compared to the original ELM and to three other widely used methodologies: the multilayer perceptron (MLP), the support vector machine (SVM), and the Gaussian process (GP). As the experiments for both regression and classification illustrate, the proposed OP-ELM methodology runs several orders of magnitude faster than the other algorithms used in this brief, except the original ELM. Despite its simplicity and speed, OP-ELM maintains an accuracy comparable to that of the SVM. A toolbox for the OP-ELM is publicly available online.

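The base ELM that OP-ELM builds on is compact enough to sketch: a random, untrained hidden layer followed by a least-squares solve for the output weights. The toy data and layer sizes below are made up, and the ranking and pruning steps of OP-ELM are not shown:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x) plus a little noise
X = rng.uniform(-3.0, 3.0, size=(200, 1))
y = np.sin(X[:, 0]) + 0.01 * rng.standard_normal(200)

# ELM: the hidden layer is random and never trained; only the output
# weights are fitted, by ordinary least squares.
n_hidden = 40
W = rng.standard_normal((1, n_hidden))
b = rng.standard_normal(n_hidden)
H = np.tanh(X @ W + b)                        # hidden-layer activations
beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights

mse = np.mean((H @ beta - y) ** 2)
print(round(mse, 4))  # small on this toy problem
```

Because only a linear solve is needed, training is fast; OP-ELM then ranks the hidden neurons and prunes the least useful ones to control overfitting.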
  • Feature Fusion Using Locally Linear Embedding for Classification

    Page(s): 163 - 168
    PDF (300 KB) | HTML

    In most complex classification problems, many types of features have been captured or extracted. Feature fusion is used to combine features for better classification and to reduce data dimensionality. Kernel-based feature fusion methods are very effective for classification, but they do not reduce data dimensionality. In this brief, we propose an effective feature fusion method using locally linear embedding (LLE). The proposed method overcomes the limitations of LLE, which could not handle different types of features and is inefficient for classification. We propose an efficient algorithm to solve the optimization problem in obtaining weights of different features, and design an efficient method for LLE-based classification. In comparison to other kernel-based feature fusion methods, the proposed method fuses features to a significantly lower dimensional feature space with the same discriminant power. We have conducted experiments to demonstrate the effectiveness of the proposed feature fusion method.

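The first step of standard LLE, which the proposed fusion method builds on, reconstructs each point from its nearest neighbors with weights constrained to sum to one. A minimal numpy sketch of that step (generic LLE, not the paper's fusion formulation; the data and neighborhood size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 3))   # 20 hypothetical feature vectors

def lle_weights(X, i, k=4):
    """LLE step 1: reconstruction weights of point i from its k nearest neighbors."""
    dists = np.linalg.norm(X - X[i], axis=1)
    nbrs = np.argsort(dists)[1:k + 1]        # skip the point itself
    Z = X[nbrs] - X[i]                       # neighbors centered on point i
    G = Z @ Z.T                              # local Gram matrix
    G += 1e-3 * np.trace(G) * np.eye(k)      # regularize (G may be singular)
    w = np.linalg.solve(G, np.ones(k))
    return nbrs, w / w.sum()                 # enforce the sum-to-one constraint

nbrs, w = lle_weights(X, 0)
print(round(float(w.sum()), 6))  # 1.0 by construction
```

The second LLE step, omitted here, finds low-dimensional coordinates that preserve these weights; the brief modifies the pipeline to fuse heterogeneous feature types.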
  • Exponential Stability on Stochastic Neural Networks With Discrete Interval and Distributed Delays

    Page(s): 169 - 175
    PDF (273 KB) | HTML

    This brief addresses the stability analysis problem for stochastic neural networks (SNNs) with discrete interval and distributed time-varying delays. The interval time-varying delay is assumed to satisfy 0 < d1 ≤ d(t) ≤ d2 and is described as d(t) = d1 + h(t) with 0 ≤ h(t) ≤ d2 − d1. Based on the idea of partitioning the lower bound d1, new delay-dependent stability criteria are presented by constructing a novel Lyapunov-Krasovskii functional, which guarantees that the new stability conditions are less conservative than those in the literature. The obtained results are formulated in the form of linear matrix inequalities (LMIs). Numerical examples are provided to illustrate the effectiveness and reduced conservatism of the developed results.

  • On the Discrete-Time Dynamics of a Class of Self-Stabilizing MCA Extraction Algorithms

    Page(s): 175 - 181
    PDF (293 KB) | HTML

    Minor component analysis (MCA) deals with recovering the eigenvector associated with the smallest eigenvalue of the autocorrelation matrix of the input data, and it is a very important tool for signal processing and data analysis. This brief analyzes the convergence and stability of a class of self-stabilizing MCA algorithms via a deterministic discrete-time (DDT) method. Some sufficient conditions are obtained to guarantee the convergence of these learning algorithms. Simulations are carried out to further illustrate the theoretical results. It can be concluded that these self-stabilizing algorithms can efficiently extract the minor component (MC) and that they outperform some existing MCA methods.

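For reference, the minor component itself can be computed in batch form as the eigenvector of the sample autocorrelation matrix with the smallest eigenvalue; the iterative self-stabilizing rules analyzed in the brief approximate this quantity online. A small numpy check on synthetic data (the setup is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic input with deliberately little energy along the third axis
n = 1000
data = rng.standard_normal((n, 3))
data[:, 2] *= 0.05
R = data.T @ data / n                  # sample autocorrelation matrix

eigvals, eigvecs = np.linalg.eigh(R)   # eigenvalues in ascending order
minor_component = eigvecs[:, 0]        # eigenvector of the smallest eigenvalue

# The minor component should align with the low-variance third axis.
print(abs(minor_component[2]) > 0.99)  # True
```

Online MCA rules matter precisely when forming and eigendecomposing R is too expensive or the statistics drift over time.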
  • Special issue on White Box Nonlinear Prediction Models

    Page(s): 182
    PDF (151 KB)
    Freely Available from IEEE
  • 2010 IEEE World Congress on Computational Intelligence (WCCI)

    Page(s): 183
    PDF (755 KB)
    Freely Available from IEEE
  • Explore IEL: IEEE's most comprehensive resource [advertisement]

    Page(s): 184
    PDF (345 KB)
    Freely Available from IEEE
  • IEEE Computational Intelligence Society Information

    Page(s): C3
    PDF (37 KB)
    Freely Available from IEEE
  • IEEE Transactions on Neural Networks Information for authors

    Page(s): C4
    PDF (39 KB)
    Freely Available from IEEE

Aims & Scope

IEEE Transactions on Neural Networks is devoted to the science and technology of neural networks, publishing significant technical knowledge, exploratory developments, and applications of neural networks spanning biology, software, and hardware.


This Transactions ceased production in 2011. The current retitled publication is IEEE Transactions on Neural Networks and Learning Systems.
