Neural Networks, IEEE Transactions on

Issue 11 • Nov. 2010

Displaying Results 1 - 20 of 20
  • Table of contents

    Publication Year: 2010, Page(s): C1
    PDF (110 KB)
    Freely Available from IEEE
  • IEEE Transactions on Neural Networks publication information

    Publication Year: 2010, Page(s): C2
    PDF (39 KB)
    Freely Available from IEEE
  • Recognition of Partially Occluded and Rotated Images With a Network of Spiking Neurons

    Publication Year: 2010, Page(s): 1697 - 1709
    Cited by: Papers (5)
    PDF (1569 KB) | HTML

    In this paper, we introduce a novel system for recognition of partially occluded and rotated images. The system is based on a hierarchical network of integrate-and-fire spiking neurons with random synaptic connections and a novel organization process. The network generates integrated output sequences that are used for image classification. The proposed network is shown to provide satisfactory predictive performance provided that the numbers of recognition neurons and synaptic connections are adjusted to the size of the input image. A comparison of the synaptic plasticity activity rule (SAPR) and spike-timing-dependent plasticity rules, which are used to learn connections between the spiking neurons, indicates that the former gives better results, and thus the SAPR is used. Test results show that the proposed network performs better than a recognition system based on support vector machines.

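As background for the abstract above, the behavior of a single leaky integrate-and-fire neuron can be sketched in a few lines of Python. This is an illustrative sketch only; the function name and parameter values are ours, not the authors' network:

```python
def simulate_lif(input_current, v_rest=0.0, v_thresh=1.0, tau=10.0, dt=1.0):
    """Simulate a leaky integrate-and-fire neuron; returns spike time indices."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # leaky integration: the membrane potential decays toward rest
        # while being driven by the input current
        v += dt * (-(v - v_rest) / tau + i_in)
        if v >= v_thresh:      # threshold crossing -> emit a spike
            spikes.append(t)
            v = v_rest         # reset after spiking
    return spikes

# A constant supra-threshold current produces regular spiking.
spike_times = simulate_lif([0.2] * 50)
```

In a network such as the one described, many of these units would be wired through (random) synaptic weights, and the resulting spike sequences would feed the classifier.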
  • Novel Stability Analysis for Recurrent Neural Networks With Multiple Delays via Line Integral-Type L-K Functional

    Publication Year: 2010, Page(s): 1710 - 1718
    Cited by: Papers (7)
    PDF (463 KB) | HTML

    This paper studies the stability problem of a class of recurrent neural networks (RNNs) with multiple delays. By using an augmented matrix-vector transformation for delays and a novel line integral-type Lyapunov-Krasovskii functional, a less conservative delay-dependent global asymptotic stability criterion is first proposed for RNNs with multiple delays. The obtained stability result is easy to check and improves upon existing ones. Two numerical examples are then given to verify the effectiveness of the proposed criterion.

  • Kernel Wiener Filter and Its Application to Pattern Recognition

    Publication Year: 2010, Page(s): 1719 - 1730
    Cited by: Papers (2)
    PDF (506 KB) | HTML

    The Wiener filter (WF) is widely used for inverse problems. Among linear operators, it provides the estimated signal with the smallest squared error averaged over the original and the observed signals. The kernel WF (KWF), extended directly from the WF, has the drawback that additive noise must be handled by samples; since the computational complexity of kernel methods depends on the number of samples, this incurs a huge computational cost. By using a first-order approximation of the kernel functions, we realize a KWF that handles such noise not through samples but as a random variable. We also propose an error estimation method for kernel filters using these approximations. To show the advantages of the proposed methods, we conducted experiments on image denoising and error estimation. We also apply the KWF to classification, since it can approximate the maximum a posteriori classifier, which provides the best recognition accuracy. The noise term in the criterion can be used for classification in the presence of noise, or as a new regularization that suppresses changes in the input space, whereas the ordinary regularization for kernel methods suppresses changes in the feature space. We further conducted experiments on binary and multiclass classification and on classification in the presence of noise.

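To make the Wiener filter's role concrete, the scalar (linear MMSE) form for denoising an observation y = x + n with known, uncorrelated, zero-mean signal and noise powers can be sketched as follows. This is the classical linear WF, not the kernel extension proposed in the paper, and the function names are ours:

```python
def wiener_gain(signal_var, noise_var):
    """Scalar Wiener gain: the linear MMSE shrinkage factor for y = x + n
    with zero-mean, uncorrelated signal x and noise n."""
    return signal_var / (signal_var + noise_var)

def denoise(samples, signal_var, noise_var):
    """Apply the Wiener gain to each observed sample."""
    g = wiener_gain(signal_var, noise_var)
    return [g * y for y in samples]

# With equal signal and noise power, the estimate halves each observation.
out = denoise([2.0, -4.0], signal_var=1.0, noise_var=1.0)
```

The kernel WF generalizes this idea by performing the estimation in a reproducing kernel Hilbert space, which is why, without the paper's approximation, the noise has to be represented by samples.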
  • Clifford Support Vector Machines for Classification, Regression, and Recurrence

    Publication Year: 2010, Page(s): 1731 - 1746
    Cited by: Papers (9)
    PDF (3324 KB) | HTML

    This paper introduces the Clifford support vector machines (CSVM) as a generalization of the real- and complex-valued support vector machines using Clifford geometric algebra. In this framework, we handle the design of kernels involving the Clifford or geometric product. In this approach, the optimization variables are redefined as multivectors, which allows the output to be a multivector as well. Therefore, we can represent multiple classes according to the dimension of the geometric algebra in which we work. We show that CSVM can be applied to classification and regression and also used to build a recurrent CSVM. The CSVM is an attractive approach for the multiple-input multiple-output processing of high-dimensional geometric entities. We carried out comparisons between CSVM and current approaches to multiclass classification and regression. We also study the performance of the recurrent CSVM in experiments involving time series. The authors believe that this paper can be of great use for researchers and practitioners interested in multiclass hypercomplex computing, particularly for applications in complex and quaternion signal and image processing, satellite control, neurocomputation, pattern recognition, computer vision, augmented virtual reality, robotics, and humanoids.

  • Theoretical Model for Mesoscopic-Level Scale-Free Self-Organization of Functional Brain Networks

    Publication Year: 2010, Page(s): 1747 - 1758
    Cited by: Papers (1)
    PDF (1934 KB) | HTML

    In this paper, we provide theoretical and numerical analysis of a geometric activity flow network model which is aimed at explaining mathematically the scale-free functional graph self-organization phenomena emerging in complex nervous systems at a mesoscale level. In our model, each unit corresponds to a large number of neurons and may be roughly seen as abstracting the functional behavior exhibited by a single voxel under functional magnetic resonance imaging (fMRI). In the course of the dynamics, the units exchange portions of formal charge, which correspond to waves of activity in the underlying microscale neuronal circuit. The geometric model abstracts away the neuronal complexity and is mathematically tractable, which allows us to establish explicit results on its ground states and on the resulting charge transfer graph modeling the functional graph of the network. We show that, for a wide choice of parameters and geometrical setups, our model yields a scale-free functional connectivity with the exponent approaching 2, which is in agreement with previous empirical studies based on fMRI. The level of universality of the presented theory allows us to claim that the model sheds light on mesoscale functional self-organization phenomena of the nervous system, even without resorting to closer details of brain connectivity geometry, which often remain unknown. The material presented here significantly extends our previous work, where a simplified mean-field model in a similar spirit was constructed, ignoring the underlying network geometry.

  • New Approach for the Identification and Validation of a Nonlinear F/A-18 Model by Use of Neural Networks

    Publication Year: 2010, Page(s): 1759 - 1765
    PDF (1109 KB) | HTML

    This paper presents a new approach for identifying and validating the F/A-18 aeroservoelastic model, based on flight flutter tests. The neural network (NN), trained on five flight flutter cases, is validated using 11 other flight flutter test (FFT) datasets. A total of 16 FFT cases were obtained across all three flight regimes (subsonic, transonic, and supersonic), at Mach numbers between 0.85 and 1.30 and at altitudes between 5000 and 25 000 ft. The results highlight the efficiency of the multilayer perceptron NN in model identification. Optimizing the NN requires balancing two properties: hidden-layer size reduction and the performance of a four-layer NN. This paper shows that a four-layer NN with only 16 neurons is enough to create an accurate model. The fit coefficients were higher than 92% for both the identification and the validation test data, demonstrating the accuracy of the NN.

  • Self-Organizing MultiLayer Perceptron

    Publication Year: 2010, Page(s): 1766 - 1779
    Cited by: Papers (5)
    PDF (710 KB) | HTML

    In this paper, we propose an extension of the self-organizing map called the self-organizing multilayer perceptron (SOMLP), whose purpose is to achieve quantization of spaces of functions. Based on multilayer perceptron networks, SOMLP comprises both unsupervised and supervised learning algorithms. We demonstrate that the commonly used vector quantization algorithms (LVQ algorithms) can be used to build new algorithms, called functional quantization algorithms (LFQ algorithms). The SOMLP can be used to model nonlinear and/or nonstationary complex dynamic processes, such as speech signals. While most functional data analysis (FDA) research is based on B-splines or similar univariate functions, the SOMLP algorithm allows quantization of functions with high-dimensional input spaces. As a consequence, classical FDA methods can be outperformed by increasing the dimensionality of the input space of the functions under analysis. Experiments on artificial and real-world examples illustrate the potential of this approach.

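The LVQ family mentioned above can be illustrated with the classical LVQ1 update rule: the nearest prototype moves toward the sample if their labels agree, and away otherwise. This is a sketch of the standard vector quantization algorithm the paper generalizes to function spaces; all names and values are ours:

```python
def lvq1_step(prototypes, labels, x, y, lr=0.1):
    """One LVQ1 update. prototypes[k] is moved toward x when labels[k] == y,
    away from x otherwise. Returns the index of the winning prototype."""
    # find the nearest prototype by squared Euclidean distance
    d = [sum((p_i - x_i) ** 2 for p_i, x_i in zip(p, x)) for p in prototypes]
    k = d.index(min(d))
    sign = 1.0 if labels[k] == y else -1.0
    prototypes[k] = [p_i + sign * lr * (x_i - p_i)
                     for p_i, x_i in zip(prototypes[k], x)]
    return k

protos = [[0.0, 0.0], [1.0, 1.0]]
# sample [0.2, 0.0] with label 0: prototype 0 wins and is pulled toward it
winner = lvq1_step(protos, labels=[0, 1], x=[0.2, 0.0], y=0)
```

The paper's LFQ algorithms replace these prototype vectors with multilayer perceptrons, so that each "prototype" quantizes a region of a function space rather than of a vector space.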
  • High-Performance Reconfigurable Hardware Architecture for Restricted Boltzmann Machines

    Publication Year: 2010, Page(s): 1780 - 1792
    Cited by: Papers (4)
    PDF (779 KB) | HTML

    Despite the popularity and success of neural networks in research, the number of resulting commercial or industrial applications has been limited. A primary cause for this lack of adoption is that neural networks are usually implemented as software running on general-purpose processors. Hence, a hardware implementation that can exploit the inherent parallelism in neural networks is desired. This paper investigates how the restricted Boltzmann machine (RBM), a popular type of neural network, can be mapped to a high-performance hardware architecture on field-programmable gate array (FPGA) platforms. The proposed modular framework is designed to reduce the time complexity of the computations through heavily customized hardware engines. A method to partition large RBMs into smaller congruent components is also presented, allowing the distribution of one RBM across multiple FPGA resources. The framework is tested on a platform of four Xilinx Virtex II-Pro XC2VP70 FPGAs running at 100 MHz in a variety of configurations. The maximum performance was obtained by instantiating an RBM of 256 × 256 nodes distributed across the four FPGAs, which resulted in a computational speed of 3.13 billion connection-updates-per-second and a speedup of 145-fold over an optimized C program running on a 2.8-GHz Intel processor.

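The core computation such an RBM engine must accelerate is the contrastive-divergence weight update. A minimal software version (CD-1, no bias terms, pure Python) is sketched below; it is an illustrative reference implementation under our own naming, unrelated to the paper's hardware design:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def cd1_update(W, v0, lr=0.1, rng=None):
    """One contrastive-divergence (CD-1) step for a tiny bias-free RBM.
    W[i][j] couples visible unit i to hidden unit j; W is updated in place."""
    rng = rng or random.Random(0)
    n_v, n_h = len(W), len(W[0])
    # positive phase: hidden probabilities driven by the data vector v0
    h0 = [sigmoid(sum(v0[i] * W[i][j] for i in range(n_v))) for j in range(n_h)]
    h_sample = [1.0 if rng.random() < p else 0.0 for p in h0]
    # negative phase: reconstruct the visibles, then recompute hidden probs
    v1 = [sigmoid(sum(h_sample[j] * W[i][j] for j in range(n_h))) for i in range(n_v)]
    h1 = [sigmoid(sum(v1[i] * W[i][j] for i in range(n_v))) for j in range(n_h)]
    # weight step follows the CD-1 gradient estimate <v0 h0> - <v1 h1>
    for i in range(n_v):
        for j in range(n_h):
            W[i][j] += lr * (v0[i] * h0[j] - v1[i] * h1[j])
    return W

W = [[0.0, 0.0], [0.0, 0.0]]
cd1_update(W, v0=[1.0, 0.0])
```

Each inner sum here is one "connection-update"; the paper's figure of 3.13 billion connection-updates-per-second counts exactly these multiply-accumulate operations, which is why they parallelize so well in hardware.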
  • Neural Network Learning Without Backpropagation

    Publication Year: 2010, Page(s): 1793 - 1803
    Cited by: Papers (28)
    PDF (1103 KB) | HTML

    The method introduced in this paper allows for training arbitrarily connected neural networks; therefore, more powerful architectures with connections across layers can be efficiently trained. The proposed method also simplifies neural network training by using forward-only computation instead of the traditional forward and backward computation.

  • Approximation-Based Adaptive Tracking Control of Pure-Feedback Nonlinear Systems With Multiple Unknown Time-Varying Delays

    Publication Year: 2010, Page(s): 1804 - 1816
    Cited by: Papers (20)
    PDF (969 KB) | HTML

    This paper presents adaptive neural tracking control for a class of non-affine pure-feedback systems with multiple unknown state time-varying delays. To overcome the design difficulty arising from the non-affine structure of the pure-feedback system, the mean value theorem is exploited to deduce an affine appearance of the state variables as virtual controls and of the actual control. A separation technique is introduced to decompose the unknown functions of all time-varying delayed states into a series of continuous functions of each delayed state. Novel Lyapunov-Krasovskii functionals are employed to compensate for the unknown functions of the current delayed state, which is effectively free from any restriction on the unknown time-delay functions and overcomes the circular construction of the controller caused by the neural approximation. Novel continuous functions are introduced to overcome the design difficulty arising from the use of one adaptive parameter. To achieve uniform ultimate boundedness of all signals in the closed-loop system and tracking performance, the control gains are modified into a dynamic form using a class of even functions, which allows the stability analysis to be carried out in the presence of multiple time-varying delays. Simulation studies demonstrate the effectiveness of the proposed scheme.

  • SWAT: A Spiking Neural Network Training Algorithm for Classification Problems

    Publication Year: 2010, Page(s): 1817 - 1830
    Cited by: Papers (10)
    PDF (917 KB) | HTML

    This paper presents a synaptic weight association training (SWAT) algorithm for spiking neural networks (SNNs). SWAT merges the Bienenstock-Cooper-Munro (BCM) learning rule with spike-timing-dependent plasticity (STDP). The STDP/BCM rule yields a unimodal weight distribution, where the height of the plasticity window associated with STDP is modulated, causing stability after a period of training. The SNN uses a single training neuron in the training phase, where data associated with all classes is passed to this neuron. The rule then maps weights to the classifying output neurons to reflect similarities in the data across the classes. The SNN also includes both excitatory and inhibitory facilitating synapses, which create a frequency-routing capability that allows the information presented to the network to be routed to different hidden-layer neurons. A variable neuron threshold level simulates the refractory period. SWAT is initially benchmarked against the nonlinearly separable Iris and Wisconsin Breast Cancer datasets. The proposed training algorithm achieves a convergence accuracy of 95.5% and 96.2% on the Iris and Wisconsin training sets, respectively, and 95.3% and 96.7% on the testing sets. Noise experiments show that SWAT has good generalization capability. SWAT is also benchmarked using an isolated-digit automatic speech recognition (ASR) system on a subset of the TI46 speech corpus. With SWAT as the classifier, the ASR system achieves an accuracy of 98.875% for training and 95.25% for testing.

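The STDP component of the rule above can be illustrated with the standard pair-based exponential window: a presynaptic spike shortly before a postsynaptic spike potentiates the synapse, the reverse ordering depresses it. This is the generic textbook form with our own parameter names, not the exact BCM-modulated rule used by SWAT:

```python
import math

def stdp_dw(dt, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Pair-based STDP weight change for dt = t_post - t_pre (in ms).
    Pre-before-post (dt > 0) potentiates; post-before-pre (dt < 0) depresses."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)    # long-term potentiation
    elif dt < 0:
        return -a_minus * math.exp(dt / tau)   # long-term depression
    return 0.0

# A causal pairing strengthens the synapse; an anti-causal one weakens it.
ltp = stdp_dw(10.0)
ltd = stdp_dw(-10.0)
```

In SWAT, the BCM rule effectively modulates the amplitudes `a_plus`/`a_minus` as a function of postsynaptic activity, which is what drives the weight distribution to a stable unimodal shape.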
  • Semisupervised Kernel Matrix Learning by Kernel Propagation

    Publication Year: 2010, Page(s): 1831 - 1841
    Cited by: Papers (4)
    PDF (544 KB) | HTML

    The goal of semisupervised kernel matrix learning (SS-KML) is to learn a kernel matrix on all the given samples when only a little supervised information, such as class labels or pairwise constraints, is provided. Despite extensive research, the performance of SS-KML still leaves room for improvement in terms of effectiveness and efficiency. For example, a recent pairwise constraints propagation (PCP) algorithm formulates SS-KML as a semidefinite programming (SDP) problem, but its computation is very expensive, which undoubtedly restricts PCP's scalability in practice. In this paper, a novel algorithm, called kernel propagation (KP), is proposed to improve the overall performance of SS-KML. The main idea of KP is to first learn a small-sized sub-kernel matrix (named the seed-kernel matrix) and then propagate it into a larger-sized full-kernel matrix. Specifically, the implementation of KP consists of three stages: 1) separate the supervised sample subset from the full sample set; 2) learn a seed-kernel matrix on the subset by solving a small-scale SDP problem; and 3) propagate the learnt seed-kernel matrix into a full-kernel matrix on the full sample set. Furthermore, following the idea in KP, we develop two conveniently realizable out-of-sample extensions for KML: a batch-style extension and an online-style extension. Experiments demonstrate that KP is encouraging in both effectiveness and efficiency compared with three state-of-the-art algorithms, and that its out-of-sample extensions are promising too.

  • New Passivity Analysis for Neural Networks With Discrete and Distributed Delays

    Publication Year: 2010, Page(s): 1842 - 1847
    Cited by: Papers (10)
    PDF (181 KB) | HTML

    In this brief, the problem of passivity analysis is investigated for a class of uncertain neural networks (NNs) with both discrete and distributed time-varying delays. By constructing a novel Lyapunov functional and utilizing some advanced techniques, new delay-dependent passivity criteria are established to guarantee the passivity performance of the NNs. Essentially different from the available results, when estimating the upper bound of the derivative of the Lyapunov functionals, we consider and best utilize the additional useful terms concerning the distributed delays, which leads to less conservative results. These criteria are expressed as convex optimization problems, which can be efficiently solved via standard numerical software. Numerical examples illustrate the effectiveness and reduced conservatism of the proposed results.

  • Tensor Distance Based Multilinear Locality-Preserved Maximum Information Embedding

    Publication Year: 2010, Page(s): 1848 - 1854
    Cited by: Papers (5)
    PDF (381 KB) | HTML

    This brief paper presents a unified framework for tensor-based dimensionality reduction (DR) with a new tensor distance (TD) metric and a novel multilinear locality-preserved maximum information embedding (MLPMIE) algorithm. Different from traditional Euclidean distance, which is constrained by the orthogonality assumption, TD measures the distance between data points by considering the relationships among different coordinates. To preserve the natural tensor structure in low-dimensional space, MLPMIE directly works on the high-order form of input data and iteratively learns the transformation matrices. In order to preserve the local geometry and to maximize the global discrimination simultaneously, MLPMIE keeps both local and global structures in a manifold model. By integrating TD into tensor embedding, TD-MLPMIE performs tensor-based DR through the whole learning procedure, and achieves stable performance improvement on various standard datasets.

  • Special issue on data-based optimization, control and modeling

    Publication Year: 2010, Page(s): 1855
    PDF (118 KB)
    Freely Available from IEEE
  • Access over 1 million articles - The IEEE Digital Library [advertisement]

    Publication Year: 2010, Page(s): 1856
    PDF (370 KB)
    Freely Available from IEEE
  • IEEE Computational Intelligence Society Information

    Publication Year: 2010, Page(s): C3
    PDF (37 KB)
    Freely Available from IEEE
  • IEEE Transactions on Neural Networks Information for authors

    Publication Year: 2010, Page(s): C4
    PDF (39 KB)
    Freely Available from IEEE

Aims & Scope

IEEE Transactions on Neural Networks is devoted to the science and technology of neural networks, publishing significant technical knowledge, exploratory developments, and applications of neural networks, from biology to software to hardware.

 

This Transactions ceased production in 2011. The current retitled publication is IEEE Transactions on Neural Networks and Learning Systems.
