IEEE Transactions on Neural Networks

Issue 12 • Dec. 2010

  • Table of contents

    Page(s): C1
    PDF (114 KB)
    Freely Available from IEEE
  • IEEE Transactions on Neural Networks publication information

    Page(s): C2
    PDF (39 KB)
    Freely Available from IEEE
  • Neuro-Adaptive Force/Position Control With Prescribed Performance and Guaranteed Contact Maintenance

    Page(s): 1857 - 1868
    PDF (759 KB) | HTML

    In this paper, we address unresolved issues in robot force/position tracking, including the concurrent satisfaction of contact maintenance, absence of overshoot, desired speed of response, and accuracy level. The control objective is satisfied under uncertainties in the force deformation model and disturbances acting at the joints. The unknown nonlinearities arising from the uncertainties in the force deformation model are approximated by a neural network that is linear in the weights, and it is proven that the neural network approximation holds for all time irrespective of the magnitude of the modeling error, the disturbances, and the controller gains. Thus, the controller gains are easily selected, and potentially large neural network approximation errors as well as disturbances can be tolerated. Simulation results on a 6-DOF robot confirm the theoretical findings.

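    The phrase "a neural network linear in the weights" refers to an approximator of the form f_hat(x) = W^T phi(x), where only the output weights W are adapted over fixed nonlinear features phi. Below is a minimal sketch of that structure, assuming toy RBF features, an arbitrary target nonlinearity, and a gradient-style update; it illustrates the approximator class only, not the paper's controller or adaptive law.

```python
import numpy as np

# Minimal sketch of an approximator "linear in the weights":
# f_hat(x) = W^T phi(x), with fixed radial basis features phi and only
# the output weights W adapted. The centers, width, target function, and
# gradient-style update are illustrative assumptions, not the paper's
# controller or adaptive law.
centers = np.linspace(-1.0, 1.0, 10)           # RBF centers (assumed)
width = 0.2                                    # RBF width (assumed)

def phi(x):
    """Fixed nonlinear features; only W is learned."""
    return np.exp(-((x - centers) ** 2) / (2 * width ** 2))

def f_true(x):                                 # stand-in unknown nonlinearity
    return np.sin(3 * x) + 0.5 * x

W = np.zeros_like(centers)
eta = 0.1                                      # learning rate (assumed)
rng = np.random.default_rng(0)
for _ in range(5000):
    x = rng.uniform(-1.0, 1.0)
    e = f_true(x) - W @ phi(x)                 # pointwise approximation error
    W += eta * e * phi(x)                      # LMS-style weight update
print("max error on grid:",
      max(abs(W @ phi(x) - f_true(x)) for x in np.linspace(-1, 1, 50)))
```
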
  • Stability Analysis of Multiplicative Update Algorithms and Application to Nonnegative Matrix Factorization

    Page(s): 1869 - 1881
    PDF (436 KB) | HTML

    Multiplicative update algorithms have proved to be a great success in solving optimization problems with nonnegativity constraints, such as the famous nonnegative matrix factorization (NMF) and its many variants. However, despite several years of research on the topic, the understanding of their convergence properties is still to be improved. In this paper, we show that Lyapunov's stability theory provides a very enlightening viewpoint on the problem. We prove the exponential or asymptotic stability of the solutions to general optimization problems with nonnegativity constraints, including the particular case of supervised NMF, and finally study the more difficult case of unsupervised NMF. The theoretical results presented in this paper are confirmed by numerical simulations involving both supervised and unsupervised NMF, and the convergence speed of NMF multiplicative updates is investigated.

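    The multiplicative updates under study are the classic Lee-Seung rules, which preserve nonnegativity because every factor is multiplied by a ratio of nonnegative terms. A minimal sketch, with the rank, iteration count, and division guard as assumptions:

```python
import numpy as np

# Standard Lee-Seung multiplicative updates for NMF (V ~ W @ H), the
# family of algorithms whose stability the paper analyzes with Lyapunov
# theory. The rank, iteration count, and the small epsilon guarding
# division by zero are implementation assumptions.
rng = np.random.default_rng(0)
V = rng.random((30, 20))                  # nonnegative data matrix
r, eps = 5, 1e-12                         # factorization rank (assumed)
W, H = rng.random((30, r)), rng.random((r, 20))
for _ in range(200):
    H *= (W.T @ V) / (W.T @ W @ H + eps)  # ratio update keeps H nonnegative
    W *= (V @ H.T) / (W @ H @ H.T + eps)  # ratio update keeps W nonnegative
print("reconstruction error:", np.linalg.norm(V - W @ H))
```
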
  • Computing and Analyzing the Sensitivity of MLP Due to the Errors of the i.i.d. Inputs and Weights Based on CLT

    Page(s): 1882 - 1891
    PDF (875 KB) | HTML

    In this paper, we propose an algorithm based on the central limit theorem (CLT) to compute the sensitivity of the multilayer perceptron (MLP) to errors in the inputs and weights. For simplicity and practicality, all inputs and weights studied here are independent and identically distributed (i.i.d.). The theoretical results derived from the proposed algorithm show that the sensitivity of the MLP is affected by the number of layers and the number of neurons in each layer. Experimental results match the theoretical ones, and this good agreement verifies the reliability and feasibility of the proposed algorithm. Furthermore, the algorithm can be applied to compute precisely the sensitivity of an MLP with any available activation function and any type of i.i.d. inputs and weights.

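    Sensitivity here is the statistical deviation of the network output caused by i.i.d. perturbations of the inputs and weights. The sketch below estimates that quantity by Monte Carlo on a toy tanh MLP, a brute-force stand-in for the paper's CLT-based analytical computation; the architecture and noise levels are assumptions.

```python
import numpy as np

# Monte Carlo estimate of MLP output sensitivity to i.i.d. perturbations
# of inputs and weights. This empirically approximates the quantity the
# paper computes analytically via the CLT; the layer widths, tanh
# activation, and noise levels below are illustrative assumptions.
rng = np.random.default_rng(0)
sizes = [10, 20, 20, 1]                      # layer widths (assumed)
Ws = [rng.normal(0, 1 / np.sqrt(m), (m, n))
      for m, n in zip(sizes[:-1], sizes[1:])]

def mlp(x, weights):
    for W in weights[:-1]:
        x = np.tanh(x @ W)
    return x @ weights[-1]

x0 = rng.normal(size=(1, sizes[0]))
y0 = mlp(x0, Ws)
sigma_x, sigma_w, trials = 0.05, 0.05, 2000  # i.i.d. noise levels (assumed)
devs = []
for _ in range(trials):
    xp = x0 + rng.normal(0, sigma_x, x0.shape)
    Wp = [W + rng.normal(0, sigma_w, W.shape) for W in Ws]
    devs.append((mlp(xp, Wp) - y0).item())   # output deviation per trial
print("sensitivity (output std):", np.std(devs))
```
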
  • Decoding Stimulus-Reward Pairing From Local Field Potentials Recorded From Monkey Visual Cortex

    Page(s): 1892 - 1902
    PDF (1633 KB) | HTML

    Single-trial decoding of brain recordings is a real challenge, since it pushes the signal-to-noise ratio issue to the limit. In this paper, we concentrate on the single-trial decoding of stimulus-reward pairing from local field potentials (LFPs) recorded chronically in the visual cortical area V4 of monkeys during a perceptual conditioning task. We developed a set of physiologically meaningful features that can classify and monitor the monkey's training performance. One of these features is based on the recently discovered propagation of waves of LFPs in the visual cortex. Time-frequency features together with spatial features (phase synchrony and wave propagation) yield, after a feature selection procedure, exceptionally good single-trial classification performance, even with a linear classifier.

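    As a toy illustration of single-trial decoding from time-frequency features, the sketch below classifies synthetic "LFP" trials by gamma-band power with a nearest-centroid linear rule. The synthetic reward effect, sampling rate, and band edges are all assumptions; the paper's feature set additionally includes phase synchrony and wave propagation.

```python
import numpy as np

# Illustrative single-trial pipeline: a band-power (time-frequency)
# feature from synthetic "LFP" trials plus a linear nearest-centroid
# classifier. The sampling rate, gamma band, and the synthetic reward
# effect are assumptions, not the paper's recorded data or full features.
rng = np.random.default_rng(0)
fs, n_samp = 500, 1000                    # sampling rate, samples per trial

def make_trial(rewarded):
    t = np.arange(n_samp) / fs
    amp = 2.0 if rewarded else 1.0        # reward pairing boosts 40-Hz power
    return amp * np.sin(2 * np.pi * 40 * t) + rng.normal(0, 1, n_samp)

def band_power(trial, lo, hi):
    spec = np.abs(np.fft.rfft(trial)) ** 2
    freqs = np.fft.rfftfreq(n_samp, 1 / fs)
    return spec[(freqs >= lo) & (freqs < hi)].sum()

labels = np.array([0, 1] * 50)
X = np.array([[band_power(make_trial(bool(l)), 30, 80)] for l in labels])
c0, c1 = X[labels == 0].mean(0), X[labels == 1].mean(0)  # class centroids
pred = (np.abs(X - c1) < np.abs(X - c0)).ravel().astype(int)
print("single-trial accuracy:", (pred == labels).mean())
```
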
  • Condensed Vector Machines: Learning Fast Machine for Large Data

    Page(s): 1903 - 1914
    PDF (647 KB) | HTML

    Scalability is one of the main challenges for kernel-based methods and support vector machines (SVMs). The quadratic memory demand for storing kernel matrices makes training on million-size data impossible. Sophisticated decomposition algorithms have been proposed to efficiently train SVMs using only important examples, which ideally are the final support vectors (SVs). However, the capability of the decomposition method is limited in large-scale applications where the number of SVs is still too large for a computer's capacity. From another perspective, the large number of SVs slows down SVMs in the testing phase, making them impractical for many applications. In this paper, we introduce the integration of a vector combination scheme that simplifies the SVM solution into an incremental working set selection for SVM training. The main objective of the integration is to maintain a minimal number of final SVs, bringing a minimum resource demand and faster training time. Consequently, the learning machines are more compact and run faster thanks to the small number of vectors in their solution. Experimental results on large benchmark datasets show that the proposed condensed SVMs achieve both training and testing efficiency while maintaining generalization ability equivalent to that of normal SVMs.

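    The vector-combination idea, reducing a kernel expansion to fewer vectors, can be illustrated generically: pick a small set of vectors and solve a least-squares problem so their kernel expansion matches the full decision function on the data. This is a standard reduced-set sketch on assumed toy data, not the paper's incremental working-set training scheme.

```python
import numpy as np

# Generic reduced-set condensation of a kernel decision function:
# approximate f(x) = sum_i alpha_i k(x_i, x) with fewer vectors z_j by a
# least-squares fit on the kernel matrix. This illustrates the "vector
# combination" idea only; the data, coefficients, and subset choice are
# assumptions, and the paper trains condensed SVMs quite differently.
rng = np.random.default_rng(0)

def rbf(A, B, gamma=1.0):
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

X = rng.normal(size=(200, 2))                 # stand-in support vectors
alpha = rng.normal(size=200)                  # stand-in SV coefficients
Z = X[rng.choice(200, 20, replace=False)]     # 20 condensed vectors (assumed)
# Choose beta so that sum_j beta_j k(z_j, .) matches f on the SVs:
beta, *_ = np.linalg.lstsq(rbf(X, Z), rbf(X, X) @ alpha, rcond=None)
f_full, f_cond = rbf(X, X) @ alpha, rbf(X, Z) @ beta
print("relative error:",
      np.linalg.norm(f_full - f_cond) / np.linalg.norm(f_full))
```
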
  • On the Improvement of Neural Cryptography Using Erroneous Transmitted Information With Error Prediction

    Page(s): 1915 - 1924
    PDF (618 KB) | HTML

    Neural cryptography deals with the problem of “key exchange” between two neural networks using the mutual learning concept. The two networks exchange their outputs (in bits), and the key between the two communicating parties is eventually represented in the final learned weights, at which point the two networks are said to be synchronized. The security of neural synchronization is put at risk if an attacker is capable of synchronizing with either of the two parties during the training process. Diminishing the probability of such a threat therefore improves the reliability of exchanging output bits through a public channel. The synchronization-with-feedback algorithm is one existing algorithm that enhances the security of neural cryptography. This paper proposes three new algorithms to enhance the mutual learning process. They mainly depend on disrupting the attacker's confidence in the exchanged outputs and input patterns during training. The first algorithm, “Do not Trust My Partner” (DTMP), relies on one party sending erroneous output bits, with the other party capable of predicting and correcting this error. The second, “Synchronization with Common Secret Feedback” (SCSFB), keeps inputs partially secret so that the attacker has to train its network on input patterns that differ from the training sets used by the communicating parties. The third is a hybrid technique combining the features of DTMP and SCSFB. The proposed approaches are shown to outperform the synchronization-with-feedback algorithm in the time needed for the parties to synchronize.

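    The underlying protocol is mutual learning of two tree parity machines (TPMs): both parties evaluate the same public random inputs and update their weights only when their exchanged output bits agree. A minimal sketch, with network sizes as assumptions; the paper's DTMP/SCSFB modifications layer on top of this loop.

```python
import numpy as np

# Classic tree parity machine (TPM) mutual learning, the base protocol on
# which the paper's DTMP/SCSFB variants build. K hidden units with N
# inputs each, weights in [-L, L]; each party updates only the hidden
# units agreeing with its output (hebbian rule), and only when the two
# exchanged output bits match. All sizes are illustrative assumptions.
K, N, L = 3, 10, 3
rng = np.random.default_rng(0)

def tpm_output(W, X):
    sigma = np.sign((W * X).sum(axis=1))
    sigma[sigma == 0] = -1                   # break ties deterministically
    return sigma, int(np.prod(sigma))

def hebbian(W, X, sigma, tau):
    for k in range(K):
        if sigma[k] == tau:                  # update only agreeing units
            W[k] = np.clip(W[k] + sigma[k] * X[k], -L, L)

WA = rng.integers(-L, L + 1, (K, N))
WB = rng.integers(-L, L + 1, (K, N))
steps = 0
while not np.array_equal(WA, WB):
    X = rng.choice([-1, 1], (K, N))          # public random inputs
    sA, tA = tpm_output(WA, X)
    sB, tB = tpm_output(WB, X)
    if tA == tB:                             # exchanged output bits agree
        hebbian(WA, X, sA, tA)
        hebbian(WB, X, sB, tB)
    steps += 1
print("synchronized after", steps, "exchanged outputs")
```
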
  • Multiple View Clustering Using a Weighted Combination of Exemplar-Based Mixture Models

    Page(s): 1925 - 1938
    PDF (437 KB) | HTML

    Multiview clustering partitions a dataset into groups by simultaneously considering multiple representations (views) of the same instances. Hence, the information available in all views is exploited, and this may substantially improve the clustering result obtained with a single representation. Usually, multiview algorithms treat all views as equally important, which may lead to poor cluster assignments if a view is of low quality. To deal with this problem, we propose a method built upon exemplar-based mixture models, called convex mixture models (CMMs). More specifically, we present a multiview clustering algorithm, based on training a weighted multiview CMM, that associates a weight with each view and learns these weights automatically. Our approach is computationally efficient and easy to implement, involving simple iterative computations. Experiments with several datasets confirm the advantages of assigning weights to the views and the superiority of our framework over single-view and unweighted multiview CMMs, as well as over another multiview algorithm based on kernel canonical correlation analysis.

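    In a convex mixture model, every data point is a candidate exemplar and only the mixing weights are optimized, so training reduces to simple iterative updates. The single-view sketch below follows the usual EM update for mixing weights under fixed exemplar-centered components; the sharpness parameter and toy data are assumptions, and the paper's contribution, learning per-view weights across multiple views, is not shown.

```python
import numpy as np

# Single-view convex mixture model (CMM): each point is a candidate
# exemplar with a fixed component s[i, j] = exp(-beta * d(x_i, x_j)),
# and only the mixing weights q are learned, which makes the problem
# convex. beta and the two-blob data are illustrative assumptions.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(3, 0.3, (30, 2))])
n = len(X)
d = ((X[:, None] - X[None]) ** 2).sum(-1)      # squared distances
beta = 2.0                                     # sharpness parameter (assumed)
s = np.exp(-beta * d)
q = np.full(n, 1.0 / n)
for _ in range(300):
    r = q * s                                  # responsibilities, row-wise
    r /= r.sum(axis=1, keepdims=True)
    q = r.mean(axis=0)                         # EM update of mixing weights
exemplars = np.flatnonzero(q > 1e-3)           # surviving exemplars ~ clusters
print("exemplar indices:", exemplars)
print("their weights:", np.round(q[exemplars], 2))
```
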
  • Periodic Activation Function and a Modified Learning Algorithm for the Multivalued Neuron

    Page(s): 1939 - 1949
    PDF (473 KB) | HTML

    In this paper, we consider a new periodic activation function for the multivalued neuron (MVN). The MVN is a neuron with complex-valued weights and inputs/output, which are located on the unit circle. Although the MVN outperforms many other neurons and MVN-based neural networks have shown their high potential, the MVN still has a limited capability of learning highly nonlinear functions. A periodic activation function, which is introduced in this paper, makes it possible to learn nonlinearly separable problems and non-threshold multiple-valued functions using a single multivalued neuron. We call this neuron a multivalued neuron with a periodic activation function (MVN-P). The MVN-P's functionality is much higher than that of the regular MVN, and it is more efficient in solving various classification problems. A learning algorithm based on the error-correction rule for the MVN-P is also presented. It is shown that a single MVN-P can easily learn and solve benchmark classification problems that were considered unsolvable using a single neuron. It is also shown that a universal binary neuron, which can learn nonlinearly separable Boolean functions, and a regular MVN are particular cases of the MVN-P.

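    A common formulation of the MVN activation maps the argument of the complex weighted sum onto one of k sectors of the unit circle; the periodic variant divides the circle into l*k sectors and takes the sector index modulo k, so distant sectors can share an output value. A sketch under that formulation, with k, l, and the test point as assumptions:

```python
import numpy as np

# Discrete multivalued-neuron (MVN) activation and a periodic variant.
# The k-valued MVN maps the argument of the weighted sum z onto one of k
# sectors of the unit circle; the periodic variant uses l*k sectors and
# takes the sector index modulo k, which lets a single neuron separate
# classes that are not sector-contiguous. This follows the common
# formulation of these activations; k, l, and z below are assumptions.
def mvn_activation(z, k):
    j = int(np.floor(k * (np.angle(z) % (2 * np.pi)) / (2 * np.pi)))
    return np.exp(2j * np.pi * j / k)          # one of the k roots of unity

def mvn_p_activation(z, k, l):
    j = int(np.floor(k * l * (np.angle(z) % (2 * np.pi)) / (2 * np.pi)))
    return np.exp(2j * np.pi * (j % k) / k)    # same output set, periodic

z = np.exp(1j * 2.5)                           # a point on the unit circle
print("MVN output:  ", mvn_activation(z, 4))
print("MVN-P output:", mvn_p_activation(z, 4, 2))
```
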
  • Optimization Methods for Spiking Neurons and Networks

    Page(s): 1950 - 1962
    PDF (1005 KB) | HTML

    Spiking neurons and spiking neural circuits are finding uses in a multitude of tasks such as robotic locomotion control, neuroprosthetics, visual sensory processing, and audition. The desired neural output is achieved through the use of complex neuron models, or by combining multiple simple neurons into a network. In either case, a means for configuring the neuron or neural circuit is required. Manual manipulation of parameters is both time consuming and non-intuitive due to the nonlinear relationship between parameters and the neuron's output, and the complexity rises even further as neurons are networked and the systems often become mathematically intractable. In large circuits, the desired behavior and timing of action potential trains may be known while the timing of individual action potentials is unknown and unimportant, whereas in single-neuron systems the timing of individual action potentials is critical. In this paper, we automate the process of finding parameters. To configure a single neuron, we derive a maximum likelihood method for the Mihalas-Niebur neuron model. To configure neural circuits, we show how genetic algorithms (GAs) can configure the parameters of a network of simple integrate-and-fire-with-adaptation neurons. The GA approach is demonstrated both in software simulation and in a hardware implementation on a reconfigurable custom very-large-scale-integration chip.

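    As a toy version of the GA route, the sketch below evolves two parameters of a leaky integrate-and-fire neuron (time constant and threshold) to hit a target spike count under constant input. All ranges, rates, and the fitness function are assumptions, and the model is far simpler than the Mihalas-Niebur neuron or the adaptation neurons used in the paper.

```python
import numpy as np

# Toy genetic algorithm configuring a leaky integrate-and-fire neuron
# (time constant tau, threshold v_th) to produce a target spike count
# under constant drive. Ranges, mutation scale, and the fitness function
# are assumptions; the paper's single-neuron method is maximum
# likelihood on the Mihalas-Niebur model.
rng = np.random.default_rng(0)
dt, T, I = 1e-3, 1.0, 2.0                     # step, duration, drive (assumed)
target_spikes = 40

def spike_count(tau, v_th):
    v, n = 0.0, 0
    for _ in range(int(T / dt)):
        v += dt * (I - v) / tau               # leaky integration toward I
        if v >= v_th:
            v, n = 0.0, n + 1                 # reset on spike
    return n

def fitness(ind):
    return -abs(spike_count(ind[0], ind[1]) - target_spikes)

lo, hi = np.array([0.005, 0.5]), np.array([0.1, 1.5])
pop = rng.uniform(lo, hi, size=(30, 2))       # population of (tau, v_th)
for _ in range(25):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]   # keep the 10 fittest
    pop = parents[rng.integers(0, 10, 30)] + rng.normal(0, 0.01, (30, 2))
    pop = np.clip(pop, lo, hi)                # mutated offspring, clamped
best = max(pop, key=fitness)
print("tau=%.4f  v_th=%.3f  spikes=%d" % (best[0], best[1], spike_count(*best)))
```
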
  • Mixing Linear SVMs for Nonlinear Classification

    Page(s): 1963 - 1975
    PDF (714 KB) | HTML

    In this paper, we address the problem of combining linear support vector machines (SVMs) for the classification of large-scale nonlinear datasets. The motivation is to exploit both the efficiency of linear SVMs (LSVMs) in learning and prediction and the power of nonlinear SVMs in classification. To this end, we develop an LSVM mixture model that exploits a divide-and-conquer strategy, partitioning the feature space into subregions of linearly separable data points and learning an LSVM for each of these regions. We do this implicitly by deriving a generative model over the joint data and label distributions. Consequently, we can impose priors on the mixing coefficients and perform implicit model selection in a top-down manner during the parameter estimation process, which guarantees the sparsity of the learned model. Experimental results show that the proposed method can achieve the efficiency of LSVMs in the prediction phase while still providing classification performance comparable to that of nonlinear SVMs.

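    The divide-and-conquer intuition can be demonstrated with a much cruder stand-in: hard-partition the input space with k-means and fit one linear SVM per region. The paper instead learns the partition and the classifiers jointly through a generative model with priors on the mixing coefficients; the sketch below, using scikit-learn on a toy two-moons set, only shows why local linear models can handle nonlinear data.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_moons
from sklearn.svm import LinearSVC

# Crude stand-in for mixing linear SVMs: hard k-means partition of the
# input space plus one linear SVM per region. The paper learns the
# partition and the classifiers jointly via a generative model; the
# cluster count and toy dataset here are assumptions.
X, y = make_moons(n_samples=400, noise=0.15, random_state=0)
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
experts = {}
for r in range(4):
    mask = km.labels_ == r
    if len(np.unique(y[mask])) > 1:            # train only mixed regions
        experts[r] = LinearSVC(C=1.0).fit(X[mask], y[mask])

def predict(X_new):
    regions = km.predict(X_new)
    out = np.zeros(len(X_new), dtype=int)
    for i, (x, r) in enumerate(zip(X_new, regions)):
        if r in experts:
            out[i] = experts[r].predict(x.reshape(1, -1))[0]
        else:                                  # pure region: constant label
            out[i] = y[km.labels_ == r][0]
    return out

print("training accuracy:", (predict(X) == y).mean())
```
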
  • Robust Curve Clustering Based on a Multivariate t-Distribution Model

    Page(s): 1976 - 1984
    PDF (723 KB) | HTML

    This brief presents a curve clustering technique based on a new multivariate model. Instead of the usual Gaussian random-effects model, our method uses a multivariate t-distribution model, which has better robustness to outliers and noise. In our method, we use B-spline curves to model curve data and apply a mixed-effects model to capture the randomness and covariance of all curves within the same cluster. After fitting the B-spline-based mixed-effects model to the proposed multivariate t-distribution, we derive an expectation-maximization algorithm for estimating the parameters of the model, and apply the proposed approach to simulated data and a real dataset. The experimental results show that our model yields better clustering results than the conventional Gaussian random-effects model.

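    The robustness comes from the E-step of EM for the multivariate t-distribution, which assigns each point a weight u_i = (nu + p) / (nu + delta_i), where delta_i is the squared Mahalanobis distance; outliers get small weights and barely influence the M-step. A sketch of that mechanism on toy data (the brief embeds it in a B-spline mixed-effects model); the data, nu, and dimensions are assumptions.

```python
import numpy as np

# E-step "robustness weights" of the multivariate t-distribution:
# u_i = (nu + p) / (nu + delta_i), with delta_i the squared Mahalanobis
# distance. Outlying points get small u_i and are downweighted in the
# M-step, which is the source of the robustness over a Gaussian model.
# The toy data, nu, and dimension are assumptions.
rng = np.random.default_rng(0)
p, nu = 2, 4.0
X = rng.normal(0, 1, (100, p))
X[:5] += 8.0                                   # five gross outliers
mu, Sigma = X.mean(0), np.cov(X.T)
diff = X - mu
delta = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(Sigma), diff)
u = (nu + p) / (nu + delta)                    # per-point robustness weights
print("outlier weights:", np.round(u[:5], 3))
print("typical weight: ", np.round(u[5:].mean(), 3))
# A robust M-step re-estimates the mean as a u-weighted average:
mu_robust = (u[:, None] * X).sum(0) / u.sum()
print("robust mean:", np.round(mu_robust, 3), "plain mean:", np.round(mu, 3))
```
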
  • IPADE: Iterative Prototype Adjustment for Nearest Neighbor Classification

    Page(s): 1984 - 1990
    PDF (325 KB) | HTML

    Nearest prototype methods are a successful family of techniques for many pattern classification tasks. However, they present several shortcomings, such as slow response time, noise sensitivity, and high storage requirements. Data reduction techniques can alleviate these drawbacks, and prototype generation is an appropriate process for data reduction that allows fitting a dataset for nearest neighbor (NN) classification. This brief presents a methodology to learn the positioning of prototypes iteratively using real-parameter optimization procedures. Concretely, we propose an iterative prototype adjustment technique based on differential evolution. The results obtained are contrasted with nonparametric statistical tests and show that our proposal consistently outperforms previously proposed methods, making it a suitable tool for enhancing the performance of the NN classifier.

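    Positioning prototypes with differential evolution can be sketched directly: encode the prototype coordinates as a real vector and let DE/rand/1/bin maximize 1-NN training accuracy. IPADE additionally grows the prototype set iteratively; the fixed-size toy below, with standard F and CR values assumed, shows only the adjustment step.

```python
import numpy as np

# Differential evolution (DE/rand/1/bin) adjusting the coordinates of a
# fixed set of prototypes to maximize 1-NN training accuracy. IPADE also
# grows the prototype set iteratively; this fixed-size sketch with toy
# Gaussian data, one prototype per class, and the usual F and CR values
# is an assumption-laden illustration of the positioning step only.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
n_proto, dim = 2, 2 * 2                        # one 2-D prototype per class
proto_labels = np.array([0, 1])

def accuracy(flat):
    P = flat.reshape(n_proto, 2)
    d = ((X[:, None] - P[None]) ** 2).sum(-1)  # distances to prototypes
    return (proto_labels[d.argmin(1)] == y).mean()

pop = rng.normal(2, 2, (20, dim))              # flat prototype sets
F, CR = 0.5, 0.9                               # standard DE settings (assumed)
for _ in range(100):
    for i in range(20):
        others = [j for j in range(20) if j != i]
        a, b, c = pop[rng.choice(others, 3, replace=False)]
        mutant = a + F * (b - c)
        cross = rng.random(dim) < CR
        cross[rng.integers(dim)] = True        # at least one gene crosses
        trial = np.where(cross, mutant, pop[i])
        if accuracy(trial) >= accuracy(pop[i]):  # greedy selection
            pop[i] = trial
best = max(pop, key=accuracy)
print("1-NN accuracy with 2 prototypes:", accuracy(best))
```
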
  • Linear Discriminant Analysis for Signatures

    Page(s): 1990 - 1996
    PDF (183 KB) | HTML

    We propose signature linear discriminant analysis (signature-LDA) as an extension of LDA that can be applied to signatures, which are known to be more informative representations of local image features than vector representations such as visual word histograms. Based on earth mover's distances between signatures, signature-LDA does not require vectorization of local image features, which is one of the main limitations of classical LDA. Therefore, signature-LDA minimizes the loss of intrinsic information of local image features while selecting more discriminating features using label information. Empirical evidence on texture databases shows that signature-LDA improves upon state-of-the-art approaches for texture image classification and outperforms other feature selection methods for local image features.

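    Signatures are sets of (representative, weight) pairs, and the earth mover's distance (EMD) between them is the minimum cost of morphing one weighted set into the other. For one-dimensional values scipy computes it in closed form, as below; signatures over multi-dimensional local image features require a general transportation-problem solver. The example values and weights are assumptions.

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Earth mover's distance between two weighted signatures, the
# dissimilarity on which signature-LDA is built. Each signature is a set
# of (representative value, weight) pairs; scipy handles the 1-D case in
# closed form. The values and weights below are illustrative assumptions.
sig_a_vals, sig_a_w = np.array([0.0, 1.0, 3.0]), np.array([0.5, 0.3, 0.2])
sig_b_vals, sig_b_w = np.array([0.5, 2.0]), np.array([0.6, 0.4])
d = wasserstein_distance(sig_a_vals, sig_b_vals, sig_a_w, sig_b_w)
print("EMD between signatures:", round(d, 4))
```
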
  • 2010 Index IEEE Transactions on Neural Networks Vol. 21

    Page(s): 1997 - 2018
    PDF (221 KB)
    Freely Available from IEEE
  • Call for Papers: IEEE Transactions on Neural Networks Special Issue on Online Learning in Kernel Methods

    Page(s): 2019
    PDF (150 KB)
    Freely Available from IEEE
  • Access over 1 million articles - The IEEE Digital Library [advertisement]

    Page(s): 2020
    PDF (370 KB)
    Freely Available from IEEE
  • IEEE Computational Intelligence Society Information

    Page(s): C3
    PDF (37 KB)
    Freely Available from IEEE
  • IEEE Transactions on Neural Networks Information for authors

    Page(s): C4
    PDF (39 KB)
    Freely Available from IEEE

Aims & Scope

IEEE Transactions on Neural Networks is devoted to the science and technology of neural networks, publishing significant technical knowledge, exploratory developments, and applications of neural networks from biology to software to hardware.

This Transactions ceased production in 2011. The current retitled publication is IEEE Transactions on Neural Networks and Learning Systems.
