IEEE Transactions on Neural Networks

Issue 7 • Date July 2011

Displaying Results 1 - 19 of 19
  • Table of contents

    Page(s): C1
  • IEEE Transactions on Neural Networks publication information

    Page(s): C2
  • Growing Hierarchical Probabilistic Self-Organizing Graphs

    Page(s): 997 - 1008

Since the introduction of the growing hierarchical self-organizing map, much work has been done on self-organizing neural models with a dynamic structure. These models allow adjusting the layers of the model to the features of the input dataset. Here we propose a new self-organizing model which is based on a probabilistic mixture of multivariate Gaussian components. The learning rule is derived from the stochastic approximation framework, and a probabilistic criterion is used to control the growth of the model. Moreover, the model is able to adapt to the topology of each layer, so that a hierarchy of dynamic graphs is built. This overcomes the limitations of self-organizing maps with a fixed topology, and gives rise to a faithful visualization method for high-dimensional data.

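As background for the probabilistic mixture at the heart of the model above, here is a minimal sketch of how posterior responsibilities are computed for a mixture of multivariate Gaussian components. It is a generic illustration only; the function name, parameters, and toy data are hypothetical and not taken from the paper.

```python
import numpy as np

def responsibilities(X, means, covs, weights):
    """r[n, k] = P(component k | x_n) for a Gaussian mixture."""
    N, K = X.shape[0], len(weights)
    dens = np.empty((N, K))
    for k in range(K):
        diff = X - means[k]
        inv = np.linalg.inv(covs[k])
        norm = 1.0 / np.sqrt((2 * np.pi) ** X.shape[1] * np.linalg.det(covs[k]))
        # weighted Gaussian density of every sample under component k
        dens[:, k] = weights[k] * norm * np.exp(-0.5 * (diff @ inv * diff).sum(1))
    return dens / dens.sum(1, keepdims=True)   # normalize rows to posteriors

# two well-separated components; each point is claimed by the nearer one
X = np.array([[0.1, 0.0], [4.9, 5.1]])
means = [np.zeros(2), np.full(2, 5.0)]
covs = [np.eye(2), np.eye(2)]
r = responsibilities(X, means, covs, [0.5, 0.5])
```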
  • Cluster Synchronization in Directed Networks Via Intermittent Pinning Control

    Page(s): 1009 - 1020

In this paper, we investigate the cluster synchronization problem for linearly coupled networks, which can be recurrently connected neural networks, cellular neural networks, Hodgkin-Huxley models, Lorenz chaotic oscillators, etc., by adding some simple intermittent pinning controls. We assume the nodes in the network to be identical and the coupling matrix to be asymmetric. Some sufficient conditions to guarantee global cluster synchronization are presented. Furthermore, a centralized adaptive intermittent control is introduced and theoretical analysis is provided. Then, by applying the adaptive approach on the diagonal submatrices of the asymmetric coupling matrix, we also get the corresponding cluster synchronization result. Finally, numerical simulations are given to verify the theoretical results.

  • Selectable and Unselectable Sets of Neurons in Recurrent Neural Networks With Saturated Piecewise Linear Transfer Function

    Page(s): 1021 - 1031

The concepts of selectable and unselectable sets are proposed to describe some interesting dynamical properties of a class of recurrent neural networks (RNNs) with a saturated piecewise linear transfer function. A set of neurons is said to be selectable if it can be co-unsaturated at a stable equilibrium point by some external input. A set of neurons is said to be unselectable if it is not selectable, i.e., if it can never be co-unsaturated at any stable equilibrium point regardless of the input. The importance of these concepts is that they offer a new perspective on memory in RNNs. Necessary and sufficient conditions for the existence of selectable and unselectable sets of neurons are obtained. As an application, the problem of group selection is discussed using these concepts. It is shown that, under some conditions, each group is a selectable set, and each selectable set is contained in some group. Thus, groups are indicated by selectable sets of the RNNs and can be selected by external inputs. Simulations are carried out to further illustrate the theory.

  • LMI-Based Approach for Global Asymptotic Stability Analysis of Recurrent Neural Networks with Various Delays and Structures

    Page(s): 1032 - 1045

The global asymptotic stability problem is studied for a class of recurrent neural networks with distributed delays satisfying Lebesgue-Stieltjes measures, on the basis of linear matrix inequalities (LMIs). The network model concerned includes many neural network models with various delays and structures as special cases: the delays cover discrete and distributed delays, and the network structures include neutral-type and high-order networks. Many new stability criteria for these neural network models are therefore derived from the present stability analysis method. All the obtained stability results have similar matrix inequality structures and can be easily checked. Three numerical examples are used to show the effectiveness of the obtained results.

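The stability criteria above are stated as matrix inequalities that "can be easily checked." A minimal sketch of such a numerical check, assuming the criterion reduces to verifying that a symmetric matrix is negative definite; the matrix `A` below is a hypothetical stand-in, not one of the paper's criteria:

```python
import numpy as np

def is_negative_definite(Q, tol=1e-10):
    """Check Q < 0 (negative definite) via its largest eigenvalue."""
    Q = (Q + Q.T) / 2                 # symmetrize before the eigenvalue test
    return bool(np.linalg.eigvalsh(Q).max() < -tol)

# hypothetical candidate matrix from an LMI-type condition
A = np.array([[-3.0, 1.0],
              [1.0, -2.0]])
result = is_negative_definite(A)      # both eigenvalues of A are negative
```

In practice such conditions are solved with a semidefinite-programming toolbox; an eigenvalue check like this only verifies a given candidate.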
  • Analysis and Compensation of the Effects of Analog VLSI Arithmetic on the LMS Algorithm

    Page(s): 1046 - 1060

    Analog very large scale integration implementations of neural networks can compute using a fraction of the size and power required by their digital counterparts. However, intrinsic limitations of analog hardware, such as device mismatch, charge leakage, and noise, reduce the accuracy of analog arithmetic circuits, degrading the performance of large-scale adaptive systems. In this paper, we present a detailed mathematical analysis that relates different parameters of the hardware limitations to specific effects on the convergence properties of linear perceptrons trained with the least-mean-square (LMS) algorithm. Using this analysis, we derive design guidelines and introduce simple on-chip calibration techniques to improve the accuracy of analog neural networks with a small cost in die area and power dissipation. We validate our analysis by evaluating the performance of a mixed-signal complementary metal-oxide-semiconductor implementation of a 32-input perceptron trained with LMS. View full abstract»

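For reference, the LMS algorithm analyzed above amounts to a simple stochastic-gradient weight update. This is a minimal, idealized software sketch of a linear perceptron trained with LMS; the step size, toy data, and target weights are hypothetical, and none of the analog hardware effects studied in the paper are modeled:

```python
import numpy as np

def lms_train(X, d, mu=0.05, epochs=50):
    """Adapt weights w so that X @ w tracks the desired signal d."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, target in zip(X, d):
            e = target - x @ w        # instantaneous error
            w += mu * e * x           # LMS stochastic-gradient step
    return w

rng = np.random.default_rng(0)
w_true = np.array([0.5, -1.0, 2.0])   # hypothetical target weights
X = rng.standard_normal((200, 3))
d = X @ w_true                        # noiseless desired signal
w = lms_train(X, d)
```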
  • A Sequential Learning Algorithm for Complex-Valued Self-Regulating Resource Allocation Network-CSRAN

    Page(s): 1061 - 1072

This paper presents a sequential learning algorithm for a complex-valued resource allocation network with a self-regulating scheme, referred to as the complex-valued self-regulating resource allocation network (CSRAN). The self-regulating scheme in CSRAN decides what to learn, when to learn, and how to learn based on the information present in the training samples. CSRAN is a complex-valued radial basis function network with a sech activation function in the hidden layer. The network parameters are updated using a complex-valued extended Kalman filter algorithm. CSRAN starts with no hidden neurons and builds up an appropriate number of hidden neurons, resulting in a compact structure. The performance of CSRAN is evaluated on a synthetic complex-valued function approximation problem and two real-world applications: complex quadrature amplitude modulation channel equalization and adaptive beamforming. Since complex-valued neural networks are good decision makers, the decision-making ability of CSRAN is also compared with other complex-valued classifiers and the best performing real-valued classifier on two benchmark unbalanced classification problems from the UCI machine learning repository. The approximation and classification results show that CSRAN outperforms other existing complex-valued learning algorithms in the literature.

  • Adaptive Neural Network Decentralized Backstepping Output-Feedback Control for Nonlinear Large-Scale Systems With Time Delays

    Page(s): 1073 - 1086

In this paper, two adaptive neural network (NN) decentralized output feedback control approaches are proposed for a class of uncertain nonlinear large-scale systems with immeasurable states and unknown time delays. Using NNs to approximate the unknown nonlinear functions, an NN state observer is designed to estimate the immeasurable states. By combining the adaptive backstepping technique with decentralized control design principle, an adaptive NN decentralized output feedback control approach is developed. In order to overcome the problem of "explosion of complexity" inherent in the proposed control approach, the dynamic surface control (DSC) technique is introduced into the first adaptive NN decentralized control scheme, and a simplified adaptive NN decentralized output feedback DSC approach is developed. It is proved that the two proposed control approaches can guarantee that all the signals of the closed-loop system are semi-globally uniformly ultimately bounded, and the observer errors and the tracking errors converge to a small neighborhood of the origin. Simulation results are provided to show the effectiveness of the proposed approaches.

  • Sparse Neural Networks With Large Learning Diversity

    Page(s): 1087 - 1096

Coded recurrent neural networks with three levels of sparsity are introduced. The first level is related to the size of the messages, which are much smaller than the number of available neurons. The second is provided by a particular coding rule that acts as a local constraint on the neural activity. The third is a characteristic of the low final connection density of the network after the learning phase. Though the proposed network is very simple, being based on binary neurons and binary connections, it is able to learn a large number of messages and recall them, even in the presence of strong erasures. The performance of the network is assessed as a classifier and as an associative memory.

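To illustrate the general idea of an associative memory built from binary neurons and binary connections, here is a classical Willshaw-style sketch, a much simpler relative of the coded networks described above; it is not the paper's construction, and the sizes below are arbitrary:

```python
import numpy as np

N, K = 64, 4                          # neurons, active units per stored message
rng = np.random.default_rng(3)

def random_sparse(n_patterns):
    """Random binary patterns with exactly K active units each."""
    P = np.zeros((n_patterns, N), dtype=int)
    for p in P:
        p[rng.choice(N, K, replace=False)] = 1
    return P

patterns = random_sparse(5)
W = np.zeros((N, N), dtype=int)
for p in patterns:                    # Hebbian storage: OR of outer products
    W |= np.outer(p, p)

def recall(cue):
    s = W @ cue                       # binary dendritic sums
    # a unit fires only if it is connected to every active cue unit
    return (s >= cue.sum()).astype(int)

# erase one active unit from a stored pattern; recall restores it
cue = patterns[0].copy()
cue[np.flatnonzero(cue)[0]] = 0
restored = recall(cue)
```

With this threshold rule every unit of the original pattern is guaranteed to fire again; at higher memory loads, spurious extra units can also fire, which is the capacity limit such coded constructions are designed to push back.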
  • Phase Synchronization Motion and Neural Coding in Dynamic Transmission of Neural Information

    Page(s): 1097 - 1106

In order to explore the dynamic characteristics of neural coding in the transmission of neural information in the brain, a neural network model consisting of three neuronal populations is proposed in this paper using the theory of stochastic phase dynamics. Based on this model, phase synchronization motion and neural coding under spontaneous activity and stimulation are examined for varying network structures. Our analysis shows that, under spontaneous activity, the characteristics of phase neural coding are unrelated to the number of neurons participating in firing within the neuronal populations. The numerical simulation results support the existence of sparse coding within the brain, and verify the crucial importance of the magnitudes of the coupling coefficients in neural information processing, as well as the completely different information processing capabilities of serial and parallel couplings in neural information transmission. The results also show that, under external stimulation, the larger the number of neurons in a neuronal population, the more the stimulation influences the phase synchronization motion and neural coding evolution in other neuronal populations. We verify numerically the neurobiological finding that reducing the coupling coefficient between neuronal populations enhances the lateral inhibition function in neural networks, with the enhancement equivalent to depressing the neuronal excitability threshold. Thus, the neuronal populations tend to react more strongly under the same stimulation, and more neurons become excited, leading to more neurons participating in neural coding and phase synchronization motion.

  • Iterative Algorithm for Joint Zero Diagonalization With Application in Blind Source Separation

    Page(s): 1107 - 1118

A new iterative algorithm for the nonunitary joint zero diagonalization of a set of matrices is proposed for blind source separation applications. On one hand, since the zero diagonalizer of the proposed algorithm is constructed iteratively by successive multiplications of an invertible matrix, the singular solutions that occur in the existing nonunitary iterative algorithms are naturally avoided. On the other hand, compared to the algebraic method for joint zero diagonalization, the proposed algorithm requires fewer matrices to be zero diagonalized to yield even better performance. The extension of the algorithm to the complex and nonsquare mixing cases is also addressed. Numerical simulations on both synthetic data and blind source separation using time-frequency distributions illustrate the performance of the algorithm and provide a comparison to the leading joint zero diagonalization schemes.

  • Local Linear Discriminant Analysis Framework Using Sample Neighbors

    Page(s): 1119 - 1132

Linear discriminant analysis (LDA) is a very popular linear feature extraction approach. LDA algorithms usually perform well under two assumptions: that the global data structure is consistent with the local data structure, and that the input data classes are Gaussian distributed. In real-world applications, however, these assumptions are not always satisfied. In this paper, we propose an improved LDA framework, the local LDA (LLDA), which performs well without requiring the above two assumptions. The LLDA framework can effectively capture the local structure of samples. According to the type of local data structure, it incorporates several different forms of linear feature extraction, such as classical LDA and principal component analysis. The proposed framework includes two LLDA algorithms: a vector-based LLDA algorithm and a matrix-based LLDA (MLLDA) algorithm. MLLDA is directly applicable to image recognition, such as face recognition. Our algorithms need to train only a small portion of the whole training set before testing a sample, which makes them suitable for learning large-scale databases, especially when the input data dimensions are very high, while achieving high classification accuracy. Extensive experiments show that the proposed algorithms obtain good classification results.

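As background for the LLDA framework, the classical two-class (Fisher) LDA direction it generalizes can be sketched in a few lines; the toy data and names below are illustrative only:

```python
import numpy as np

def fisher_lda_direction(X0, X1):
    """Projection direction maximizing between-class over within-class scatter."""
    m0, m1 = X0.mean(0), X1.mean(0)
    # within-class scatter matrix
    Sw = (X0 - m0).T @ (X0 - m0) + (X1 - m1).T @ (X1 - m1)
    w = np.linalg.solve(Sw, m1 - m0)  # closed-form Fisher discriminant
    return w / np.linalg.norm(w)

# two well-separated toy classes in 2-D
rng = np.random.default_rng(2)
X0 = rng.normal([0.0, 0.0], 0.3, (50, 2))
X1 = rng.normal([3.0, 0.0], 0.3, (50, 2))
w = fisher_lda_direction(X0, X1)
p0, p1 = X0 @ w, X1 @ w               # classes separate along the projection
```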
  • Adaptive Learning and Control for MIMO System Based on Adaptive Dynamic Programming

    Page(s): 1133 - 1148

Adaptive dynamic programming (ADP) is a promising research field for the design of intelligent controllers that can both learn on the fly and exhibit optimal behavior. Over the past decades, several generations of ADP design have been proposed in the literature and have demonstrated many successful applications in various benchmarks and industrial settings. While much of the existing research focuses on multiple-input-single-output systems with steepest descent search, in this paper we investigate a generalized multiple-input-multiple-output (GMIMO) ADP design for online learning and control, which is applicable to a wider range of practical real-world applications. Furthermore, an improved weight-updating algorithm based on recursive Levenberg-Marquardt methods is presented and embodied in the GMIMO approach to improve its performance. Finally, we test the approach on a practical complex system: learning and control of the tension and height of the looper system in a hot strip mill. Experimental results demonstrate that the proposed approach achieves effective and robust performance.

  • Spectral Clustering on Multiple Manifolds

    Page(s): 1149 - 1161

Spectral clustering (SC) is a large family of grouping methods that partition data using the eigenvectors of an affinity matrix derived from the data. Though SC methods have been successfully applied to a large number of challenging clustering scenarios, they fail when there are significant intersections among different clusters. In this paper, based on the analysis that SC methods work well when the affinity values between points belonging to different clusters are relatively low, we propose a new method, called spectral multi-manifold clustering (SMMC), which is able to handle intersections. In our model, the data are assumed to lie on or close to multiple smooth low-dimensional manifolds, some separated and some intersecting. Local geometric information of the sampled data is then incorporated to construct a suitable affinity matrix. Finally, a spectral method is applied to this affinity matrix to group the data. Extensive experiments on synthetic as well as real datasets demonstrate the promising performance of SMMC.

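The generic SC pipeline that SMMC builds on (affinity matrix, graph Laplacian, eigenvectors, grouping) can be sketched in a few lines. This is an ordinary Gaussian-affinity bipartition on well-separated clusters, not the SMMC affinity construction; all names and data are illustrative:

```python
import numpy as np

def spectral_bipartition(X, sigma=1.0):
    """Split X into two groups via the Fiedler vector of the graph Laplacian."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))   # Gaussian affinity matrix
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(1)) - W            # unnormalized graph Laplacian
    _, vecs = np.linalg.eigh(L)          # eigenvalues in ascending order
    fiedler = vecs[:, 1]                 # second-smallest eigenvector
    return (fiedler > np.median(fiedler)).astype(int)

# two well-separated toy clusters
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.2, (30, 2)),
               rng.normal(5.0, 0.2, (30, 2))])
labels = spectral_bipartition(X)
```

For k > 2 clusters, the usual pipeline instead runs k-means on the first k eigenvectors.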
  • Adaptive Neural Output Feedback Tracking Control for a Class of Uncertain Discrete-Time Nonlinear Systems

    Page(s): 1162 - 1167

This brief studies adaptive neural output feedback tracking control of uncertain nonlinear multi-input-multi-output (MIMO) systems in discrete-time form. The considered MIMO systems are composed of n subsystems with couplings of inputs and states among the subsystems. In order to solve the noncausal problem and decouple the couplings, the systems are transformed into a predictor form. Higher order neural networks are utilized to approximate the desired controllers. Using Lyapunov analysis, it is proven that all the signals in the closed-loop system are semi-globally uniformly ultimately bounded and the output errors converge to a compact set. In contrast to existing results, the advantage of the scheme is that the number of adjustable parameters is greatly reduced. The effectiveness of the scheme is verified by a simulation example.

  • IEEE Symposium on Computational Intelligence in Bioinformatics and Computational Biology

    Page(s): 1168
  • IEEE Computational Intelligence Society Information

    Page(s): C3
  • IEEE Transactions on Neural Networks Information for authors

    Page(s): C4

Aims & Scope

IEEE Transactions on Neural Networks is devoted to the science and technology of neural networks, publishing significant technical knowledge, exploratory developments, and applications of neural networks, from biology to software to hardware.

 

This Transactions ceased production in 2011. The current retitled publication is IEEE Transactions on Neural Networks and Learning Systems.
