
IEEE Transactions on Neural Networks and Learning Systems

Issue 9 • Sept. 2012

  • Table of contents

    Page(s): C1
  • IEEE Transactions on Neural Networks and Learning Systems publication information

    Page(s): C2
  • Adaptive Pinning Control of Deteriorated Nonlinear Coupling Networks With Circuit Realization

    Page(s): 1345 - 1355

    This paper deals with a class of complex networks with nonideal coupling, and addresses the problem of asymptotically synchronizing the network by designing adaptive pinning control and coupling adjustment strategies. A more general coupled nonlinearity is considered as a perturbation of the network, and a severely faulty network, named the deteriorated network, is also proposed for further study. To eliminate these adverse effects on synchronization, indirect adaptive schemes are designed to construct controllers on the pinned nodes and adjusters on the nonuniform couplings of the unpinned nodes. According to Lyapunov stability theory, the proposed adaptive strategies ensure asymptotic synchronization of the complex network even in the presence of perturbations and network deterioration. The proposed schemes are physically implemented in circuitry and tested by simulation on a Chua's circuit network.

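    As a rough, hedged illustration of the adaptive pinning idea, the sketch below simulates a small network of scalar nodes with Laplacian coupling, pins the first few nodes with feedback gains that grow with the local synchronization error, and reports the residual error. The node dynamics, the ring graph, and the gain law are illustrative assumptions; the paper works with a Chua's circuit network and additionally adjusts the couplings of the unpinned nodes.

        import numpy as np

        # Hypothetical setup: N scalar nodes on a ring, synchronizing to an
        # equilibrium of the node dynamics.  f, the graph, and the gain law
        # are illustrative; the paper treats a Chua's circuit network.
        N, dt, steps = 10, 1e-3, 20000
        f = lambda x: -x + 2.0 * np.tanh(x)          # assumed node dynamics
        A = np.roll(np.eye(N), 1, axis=1) + np.roll(np.eye(N), -1, axis=1)
        Lap = np.diag(A.sum(axis=1)) - A             # ring-graph Laplacian
        pinned = np.arange(3)                        # pin the first 3 nodes

        s = 1.0                                      # reference state: the
        for _ in range(100):                         # stable equilibrium of f
            s = 2.0 * np.tanh(s)
        x = s + 0.5 * np.random.default_rng(0).standard_normal(N)
        k = np.zeros(N)                              # adaptive pinning gains
        gamma, c = 5.0, 2.0                          # adaptation rate, coupling

        for _ in range(steps):
            e = x - s
            u = np.zeros(N)
            u[pinned] = -k[pinned] * e[pinned]       # pinning feedback
            k[pinned] += dt * gamma * e[pinned]**2   # gain adaptation law
            x += dt * (f(x) - c * (Lap @ x) + u)

        print("max sync error:", np.abs(x - s).max())
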
  • Approximate Solutions to Ordinary Differential Equations Using Least Squares Support Vector Machines

    Page(s): 1356 - 1367

    In this paper, a new approach based on least squares support vector machines (LS-SVMs) is proposed for solving linear and nonlinear ordinary differential equations (ODEs). The approximate solution is presented in closed form by means of an LS-SVM, whose parameters are adjusted to minimize an appropriate error function. For the linear and nonlinear cases, these parameters are obtained by solving a system of linear or nonlinear equations, respectively. The method is well suited to solving mildly stiff, nonstiff, and singular ODEs with initial and boundary conditions. Numerical results demonstrate the efficiency of the proposed method compared with existing methods.

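    The flavor of the approach, a closed-form kernel expansion whose coefficients come from a single linear solve, can be sketched on the linear test problem y' = -y, y(0) = 1. The Gaussian kernel, its width, and the least-squares collocation below are illustrative assumptions, not the paper's exact LS-SVM formulation.

        import numpy as np

        # Kernel-collocation sketch for y' = -y, y(0) = 1 on [0, 3]:
        # model y(t) = sum_j alpha_j k(t, t_j) + b, enforce the ODE at the
        # collocation points plus the initial condition, and solve one
        # linear least-squares problem for (alpha, b).
        t = np.linspace(0.0, 3.0, 30)
        sig = 0.5                                    # assumed kernel width
        D = t[:, None] - t[None, :]
        K = np.exp(-D**2 / (2.0 * sig**2))           # Gaussian kernel matrix
        dK = -D / sig**2 * K                         # its derivative in t

        rows_ode = np.hstack([dK + K, np.ones((len(t), 1))])  # y' + y = 0
        row_ic = np.hstack([K[:1], np.ones((1, 1))])          # y(0) = 1
        A = np.vstack([rows_ode, 10.0 * row_ic])     # weight the IC strongly
        rhs = np.concatenate([np.zeros(len(t)), [10.0]])

        sol, *_ = np.linalg.lstsq(A, rhs, rcond=None)
        alpha, b = sol[:-1], sol[-1]
        y = K @ alpha + b
        print("max error vs exp(-t):", np.abs(y - np.exp(-t)).max())
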
  • Exponential Synchronization of Neural Networks With Discrete and Distributed Delays Under Time-Varying Sampling

    Page(s): 1368 - 1376

    This paper investigates the problem of master-slave synchronization for neural networks with discrete and distributed delays under variable sampling with a known upper bound on the sampling intervals. An improved method is proposed that captures the characteristics of sampled-data systems. Some delay-dependent criteria are derived to ensure the exponential stability of the error systems, so that the master systems synchronize with the slave systems. The desired sampled-data controller can be obtained by solving a set of linear matrix inequalities (LMIs), which depend upon the maximum sampling interval and the decay rate. The obtained conditions are not only less conservative but also involve fewer decision variables than existing results. Simulation results are given to show the effectiveness and benefits of the proposed methods.

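    The synthesis step above comes down to checking linear matrix inequalities. As a generic, hedged prototype (not the paper's sampling- and delay-dependent LMIs), the following verifies the basic Lyapunov LMI A^T P + P A < 0 with P > 0 using CVXPY.

        import numpy as np
        import cvxpy as cp

        # Generic Lyapunov-LMI feasibility: find P = P^T > 0 with
        # A^T P + P A < 0.  Only the simplest prototype of the paper's
        # sampling- and delay-dependent LMIs.
        A = np.array([[0.0, 1.0],
                      [-2.0, -3.0]])                 # a stable test matrix
        n = A.shape[0]
        P = cp.Variable((n, n), symmetric=True)
        eps = 1e-6
        cons = [P >> eps * np.eye(n),
                A.T @ P + P @ A << -eps * np.eye(n)]
        cp.Problem(cp.Minimize(0), cons).solve()
        print("P =\n", P.value)
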
  • Convergence and Rate Analysis of Neural Networks for Sparse Approximation

    Page(s): 1377 - 1389

    We present an analysis of the Locally Competitive Algorithm (LCA), a Hopfield-style neural network that efficiently solves sparse approximation problems (e.g., approximating a vector from a dictionary using just a few nonzero coefficients). This class of problems plays a significant role both in theories of neural coding and in signal processing applications. However, the LCA has lacked an analysis of its convergence properties, and previous results on neural networks for nonsmooth optimization do not apply to the specifics of the LCA architecture. We show that the LCA has desirable convergence properties, such as stability and global convergence to the optimum of the objective function when it is unique. Under some mild conditions, the support of the solution is also proven to be reached in finite time. Furthermore, some restrictions on the problem specifics allow us to characterize the convergence rate of the system by showing that the LCA converges exponentially fast, with an analytically bounded convergence rate. We support our analysis with several illustrative simulations.

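    For concreteness, a minimal numpy sketch of the LCA dynamics analyzed in the paper: the internal states follow a leaky integration driven by dictionary correlations, the outputs are a soft-thresholded copy of the states, and the fixed points minimize the usual l1-penalized least-squares objective. The problem sizes and constants are illustrative.

        import numpy as np

        # LCA dynamics: u follows a leaky integration, a = soft-threshold(u),
        # and fixed points minimize 0.5*||y - Phi a||^2 + lam*||a||_1.
        rng = np.random.default_rng(0)
        m, n, lam, tau, dt = 40, 100, 0.2, 1.0, 0.01
        Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # dictionary
        a_true = np.zeros(n)
        a_true[rng.choice(n, 5, replace=False)] = 1.0
        y = Phi @ a_true                                 # sparse signal

        soft = lambda u: np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)
        b, G = Phi.T @ y, Phi.T @ Phi - np.eye(n)        # drive, inhibition
        u = np.zeros(n)
        for _ in range(5000):                            # Euler integration
            u += (dt / tau) * (b - u - G @ soft(u))

        a = soft(u)
        print("residual:", np.linalg.norm(y - Phi @ a),
              "| nonzeros:", np.count_nonzero(a))
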
  • In-Sample and Out-of-Sample Model Selection and Error Estimation for Support Vector Machines

    Page(s): 1390 - 1406

    In-sample approaches to model selection and error estimation of support vector machines (SVMs) are not as widespread as out-of-sample methods, where part of the data is removed from the training set for validation and testing purposes, mainly because their practical application is not straightforward and the latter provide, in many cases, satisfactory results. In this paper, we survey some recent and not-so-recent results of the data-dependent structural risk minimization framework and propose a proper reformulation of the SVM learning algorithm, so that the in-sample approach can be applied effectively. The experiments, performed on both simulated and real-world datasets, show that our in-sample approach compares favorably with out-of-sample methods, especially in cases where the latter provide questionable results. In particular, when the number of samples is small compared to their dimensionality, as in the classification of microarray data, our proposal can outperform conventional out-of-sample approaches such as cross validation, leave-one-out, and bootstrap methods.

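    For contrast with the in-sample proposal, here is a minimal sketch of the out-of-sample baseline discussed above: k-fold cross validation for selecting the SVM regularization parameter in a small-sample, high-dimensional regime. The data and parameter grid are synthetic and illustrative.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import GridSearchCV, cross_val_score

        # Out-of-sample baseline: 5-fold CV for the SVM regularization
        # parameter on synthetic small-sample, high-dimensional data.  Note
        # that reusing the same folds to estimate the error of the selected
        # model is optimistically biased -- part of the paper's motivation.
        rng = np.random.default_rng(0)
        X = rng.standard_normal((60, 200))           # 60 samples, 200 features
        y = np.sign(X[:, 0] + 0.5 * rng.standard_normal(60))

        grid = GridSearchCV(SVC(kernel="linear"),
                            {"C": [0.01, 0.1, 1.0, 10.0]}, cv=5)
        grid.fit(X, y)
        err = 1.0 - cross_val_score(grid.best_estimator_, X, y, cv=5).mean()
        print("chosen C:", grid.best_params_["C"],
              "| CV error estimate:", round(err, 3))
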
  • Robust Exponential Stability of Uncertain Stochastic Neural Networks With Distributed Delays and Reaction-Diffusions

    Page(s): 1407 - 1416

    This paper considers the problem of stability analysis for uncertain stochastic neural networks with distributed delays and reaction-diffusion terms. Two sufficient conditions for robust exponential stability in the mean square of the given network are developed by using a Lyapunov-Krasovskii functional, an integral inequality, and some analysis techniques. The conditions, which are expressed as linear matrix inequalities, can be checked easily. Two simulation examples are given to demonstrate the reduced conservatism of the proposed conditions.

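    As a hedged illustration of the tools named above (the notation is assumed here, not taken from the paper), a typical Lyapunov-Krasovskii functional for a system with a distributed delay has the form

        V(x_t) = x^{\top}(t) P x(t)
                 + \int_{t-\tau}^{t} x^{\top}(s) Q x(s)\,ds
                 + \int_{-\tau}^{0}\!\int_{t+\theta}^{t}
                     \dot{x}^{\top}(s) R \dot{x}(s)\,ds\,d\theta,
        \qquad P, Q, R \succ 0,

    and requiring that its expected derivative decay along trajectories, with integral inequalities bounding the delay terms, leads to LMI-type conditions for mean-square exponential stability.
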
  • Online Kernel-Based Learning for Task-Space Tracking Robot Control

    Page(s): 1417 - 1425

    Task-space control of redundant robot systems based on analytical models is known to be susceptible to modeling errors. Data-driven model learning methods may present an interesting alternative approach. However, learning models for task-space tracking control from sampled data is an ill-posed problem: the same input data point can yield many different output values, which can form a nonconvex solution space. Because the problem is ill-posed, such models cannot be learned with common regression methods. While learning task-space control mappings is globally ill-posed, recent work has shown that it is locally a well-defined problem. In this paper, we use this insight to formulate a local kernel-based approach to online model learning for task-space tracking control. We propose a parametrization for the local model that makes its application to task-space tracking control of redundant robots possible. The model parametrization further allows us to apply the kernel trick and therefore enables a formulation within the kernel learning framework. In our evaluations, we demonstrate the method's ability to learn models online for task-space tracking control of redundant robots.

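    The generic kernel-trick core that such a local model builds on can be sketched with plain kernel ridge regression. The paper's contribution is the local parametrization wrapped around this core for redundant robots, which is not reproduced here.

        import numpy as np

        # Kernel ridge regression: alpha = (K + lam I)^{-1} y, prediction
        # k(x)^T alpha.  Only the generic regression core, on toy 1-D data.
        rng = np.random.default_rng(0)
        X = rng.uniform(-3, 3, size=(100, 1))        # training inputs
        y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(100)

        def k(A, B):                                 # Gaussian kernel matrix
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-0.5 * d2)

        lam = 0.1
        alpha = np.linalg.solve(k(X, X) + lam * np.eye(len(X)), y)

        Xq = np.linspace(-3, 3, 7)[:, None]          # query points
        print(np.round(k(Xq, X) @ alpha, 2))         # approximates sin at Xq
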
  • Memristor Bridge Synapse-Based Neural Network and Its Learning

    Page(s): 1426 - 1435

    An analog hardware architecture of a memristor bridge synapse-based multilayer neural network and its learning scheme are proposed. The use of the memristor bridge synapse in the proposed architecture solves one of the major problems of analog neural network implementations: nonvolatile weight storage. To compensate for the spatial nonuniformity and nonideal response of the memristor bridge synapse, a modified chip-in-the-loop learning scheme suitable for the proposed architecture is also proposed. In the proposed method, the initial learning is conducted in software, and the behavior of the software-trained network is then learned by the hardware network, one single-layered neuron at a time. The forward calculation of the single-layered neuron learning is implemented in circuit hardware and is followed by a weight-updating phase assisted by a host computer. Unlike conventional chip-in-the-loop learning, the need to read out synaptic weights for calculating weight updates in each epoch is eliminated by virtue of the memristor bridge synapse and the proposed learning scheme. The hardware architecture, along with the successful application of the proposed learning to a three-bit parity network and to a car detection network, is also presented.

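    Below is a schematic and intentionally hypothetical sketch of the layer-wise chip-in-the-loop flow described above: the hardware forward pass is simulated by a distorted matrix-vector product, and the host computes updates from hardware outputs alone, without reading weights back. All functions and constants are stand-ins, not the paper's circuit model.

        import numpy as np

        rng = np.random.default_rng(0)

        def hardware_forward(W, x):
            # Hypothetical stand-in for the memristor-bridge circuit: a
            # nonideal, spatially nonuniform gain distorts the weights.
            return np.tanh((W * (1.0 + 0.05 * np.sin(3.0 * W))) @ x)

        def teach_layer(W_hw, X, target, lr=0.5, epochs=300):
            # Make one single-layered block reproduce its software-trained
            # counterpart using only hardware forward passes; no weight
            # readout is needed, as in the scheme described above.
            for _ in range(epochs):
                for x, tgt in zip(X, target):
                    out = hardware_forward(W_hw, x)
                    W_hw += lr * np.outer((tgt - out) * (1 - out**2), x)
            return W_hw

        X = rng.uniform(-1, 1, size=(50, 4))
        W_sw = rng.standard_normal((3, 4))          # software-trained weights
        target = np.tanh(X @ W_sw.T)                # behavior to reproduce
        W_hw = teach_layer(0.1 * rng.standard_normal((3, 4)), X, target)
        out = np.array([hardware_forward(W_hw, x) for x in X])
        print("max output mismatch:", np.abs(out - target).max())
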
  • Efficient Sparse Modeling With Automatic Feature Grouping

    Page(s): 1436 - 1447

    For high-dimensional data, it is often desirable to group similar features together during the learning process. This can reduce the estimation variance and improve the stability of feature selection, leading to better generalization. Moreover, it can also help in understanding and interpreting the data. The octagonal shrinkage and clustering algorithm for regression (OSCAR) is a recent sparse-modeling approach that uses an ℓ1-regularizer and a pairwise ℓ∞-regularizer on the feature coefficients to encourage such feature grouping. Computationally, however, its optimization procedure is very expensive. In this paper, we propose an efficient solver based on the accelerated gradient method. We show that its key proximal step can be solved by a highly efficient, simple iterative group-merging algorithm. Given d input features, this reduces the empirical time complexity from O(d^2)-O(d^5) for the existing solvers to just O(d). Experimental results on a number of toy and real-world datasets demonstrate that OSCAR is a competitive sparse-modeling approach, with the added ability of automatic feature grouping.

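    The pairwise ℓ∞ penalty admits an equivalent form over coefficients sorted by magnitude, which is the structure that fast group-merging proximal solvers exploit. The snippet below only checks this identity numerically; it is not the paper's O(d) solver.

        import numpy as np

        # Identity check: lam1*||w||_1 + lam2*sum_{j<k} max(|w_j|, |w_k|)
        # equals a weighted sum over magnitudes sorted in ascending order,
        # since the i-th smallest magnitude is the max in exactly i pairs.
        rng = np.random.default_rng(0)
        w, lam1, lam2 = rng.standard_normal(8), 0.3, 0.1

        d = len(w)
        pairwise = sum(max(abs(w[j]), abs(w[k]))
                       for j in range(d) for k in range(j + 1, d))
        naive = lam1 * np.abs(w).sum() + lam2 * pairwise

        a = np.sort(np.abs(w))                       # ascending magnitudes
        sorted_form = ((lam1 + lam2 * np.arange(d)) * a).sum()
        print(np.isclose(naive, sorted_form))        # True
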
  • Hierarchical Approach for Multiscale Support Vector Regression

    Page(s): 1448 - 1460

    Support vector regression (SVR) is based on a linear combination of displaced replicas of the same function, called a kernel. When the function to be approximated is nonstationary, the single-kernel approach may be ineffective, as it is not able to follow variations in the frequency content across different regions of the input space. The hierarchical support vector regression (HSVR) model presented here aims to provide a good solution in these cases as well. HSVR consists of a set of hierarchical layers, each containing a standard SVR with a Gaussian kernel at a given scale. As the scale decreases layer by layer, finer details are incorporated into the regression function. HSVR has been applied to a wide range of noisy synthetic and real datasets, and it has shown the ability to denoise the original data, obtaining an effective multiscale reconstruction of better quality than that obtained by standard SVR. Results also compare favorably with multikernel approaches. Furthermore, tuning the SVR configuration parameters is greatly simplified in the HSVR model.

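    A hedged sketch of the layering idea using scikit-learn: each layer is a standard Gaussian SVR fitted to the previous layer's residuals at a finer scale (a larger gamma means a narrower kernel). The scales and SVR settings are illustrative choices.

        import numpy as np
        from sklearn.svm import SVR

        # Coarse-to-fine layering: each layer fits the previous residual
        # with a narrower Gaussian kernel (larger gamma).
        rng = np.random.default_rng(0)
        X = np.sort(rng.uniform(0, 1, 200))[:, None]
        y = (np.sin(2 * np.pi * X[:, 0])             # slow component
             + 0.3 * np.sin(40 * np.pi * X[:, 0])    # fast component
             + 0.05 * rng.standard_normal(200))      # noise

        residual, layers = y.copy(), []
        for gamma in [1.0, 30.0, 1000.0]:            # decreasing scales
            m = SVR(kernel="rbf", gamma=gamma, C=10.0, epsilon=0.02)
            m.fit(X, residual)
            residual -= m.predict(X)                 # pass residual down
            layers.append(m)

        yhat = sum(m.predict(X) for m in layers)
        print("train RMSE:", np.sqrt(np.mean((y - yhat) ** 2)))
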
  • Discretized-Vapnik-Chervonenkis Dimension for Analyzing Complexity of Real Function Classes

    Page(s): 1461 - 1472

    In this paper, we introduce the discretized Vapnik-Chervonenkis (VC) dimension for studying the complexity of a real function class, and then analyze properties of real function classes and neural networks. We first prove that a countable traversal set is enough to achieve the VC dimension of a real function class, whereas the classical definition takes the traversal set to be the output range of the function class. Based on this result, we propose the discretized-VC dimension, defined by using a countable traversal set consisting of rational numbers in the range of the real function class. Using the discretized-VC dimension, we show that if a real function class has finite VC dimension, only a finite traversal set is needed to achieve the VC dimension. We then point out that real function classes with infinite VC dimension can be grouped into two categories: TYPE-A and TYPE-B. Subsequently, based on the obtained results, we discuss the relationship between the VC dimension of an indicator-output network and that of a real-output network when both networks have the same structure except for the output activation functions. Finally, we present a risk bound based on the discretized-VC dimension for a real function class that has infinite VC dimension and is of TYPE-A. We prove that, for such a function class, the empirical risk minimization (ERM) principle remains consistent with overwhelming probability. This develops the existing result that ERM learning is consistent if and only if the function class has a finite VC dimension.

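    In the standard threshold-based formulation that such results build on (the notation is assumed here, not taken from the paper), the VC dimension of a real function class F with traversal set T is

        \mathrm{VC}(\mathcal{F}, T) = \max\bigl\{\, n \;:\;
          \exists\, x_1,\dots,x_n,\ \exists\, t_1,\dots,t_n \in T
          \ \text{s.t.}\
          \bigl\{\bigl(\mathrm{sgn}(f(x_1)-t_1),\dots,
                       \mathrm{sgn}(f(x_n)-t_n)\bigr) : f \in \mathcal{F}\bigr\}
          = \{-1,+1\}^n \,\bigr\}.

    The classical definition takes T to be the whole output range; the discretized-VC dimension restricts T to rational thresholds, and a finite T suffices whenever the VC dimension is finite.
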
  • Limit Set Dichotomy and Multistability for a Class of Cooperative Neural Networks With Delays

    Page(s): 1473 - 1485

    Recent papers have pointed out the interest of studying convergence in the presence of multiple equilibrium points (EPs) (multistability) for neural networks (NNs) with nonsymmetric cooperative (nonnegative) interconnections and neuron activations modeled by piecewise-linear (PL) functions. One basic difficulty is that the semiflows generated by such NNs are monotone but, due to the horizontal segments in the PL functions, are not eventually strongly monotone (ESM). Notwithstanding this, it has been shown that there are subclasses of irreducible interconnection matrices for which the semiflows, although not ESM, enjoy convergence properties similar to those of ESM semiflows. The results obtained so far concern cooperative NNs without delays. The goal of this paper is to extend some of the existing results to the relevant case of NNs with delays. More specifically, this paper considers a class of NNs with PL neuron activations, concentrated delays, a nonsymmetric cooperative interconnection matrix A, and a delay interconnection matrix Aτ. The main result is that when A+Aτ satisfies a full interconnection condition, the generated semiflows, which are monotone but not ESM, satisfy a limit set dichotomy analogous to that valid for ESM semiflows. It follows that there is an open and dense set of initial conditions, in the state space of continuous functions on a compact interval, for which the solutions converge toward an EP. The result holds in the general case where the NNs possess multiple EPs, i.e., it is a result on multistability, and it is valid for any constant value of the delays.

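    In a standard form of this model class (the notation is assumed here, not taken from the paper), the delayed cooperative NN reads

        \dot{x}(t) = -D\,x(t) + A\,g(x(t)) + A_{\tau}\,g(x(t-\tau)) + I,
        \qquad g(u) = \max(-1,\ \min(1,\ u)),

    with D a positive diagonal matrix, A and Aτ nonnegative (cooperative) interconnection matrices, and g applied componentwise; the horizontal segments of g beyond the saturation points are precisely what prevents the semiflow from being eventually strongly monotone.
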
  • Adaptive Visual and Auditory Map Alignment in Barn Owl Superior Colliculus and Its Neuromorphic Implementation

    Page(s): 1486 - 1497

    Adaptation is one of the most important phenomena in biology. A young barn owl can adapt to imposed environmental changes, such as the artificial visual distortion caused by wearing a prism. This adjustment process has been modeled mathematically, and the model replicates the sensory map realignment of the barn owl superior colliculus (SC) through axonogenesis and synaptogenesis. This allows the biological mechanism to be transferred to an artificial computing system, thereby imbuing it with a new form of adaptability to the environment. The model is demonstrated in a real-time robot environment. Results of experiments with and without prism distortion of vision are compared and show improved adaptability of the robot. However, the computation speed of the embedded system in the robot is slow. A mixed-signal (digital and analog) very-large-scale integration (VLSI) circuit has therefore been fabricated to implement the adaptive sensory pathway changes derived from the SC model at higher speed. VLSI experimental results are consistent with the simulation results.

  • Bidirectional Extreme Learning Machine for Regression Problem and Its Learning Effectiveness

    Page(s): 1498 - 1505

    The learning speed of neural networks is in general far slower than required, which has been a major bottleneck for many applications. Recently, a simple and efficient learning method, referred to as the extreme learning machine (ELM), was proposed by Huang et al.; compared with some conventional methods, it can reduce the training time of neural networks by a thousand times. However, one of the open problems in ELM research is whether the number of hidden nodes can be further reduced without affecting learning effectiveness. This brief proposes a new learning algorithm, called the bidirectional extreme learning machine (B-ELM), in which some hidden nodes are not randomly selected. In theory, this algorithm reduces the network output error to zero at an extremely early learning stage. Furthermore, we find a relationship between the network output error and the network output weights in the proposed B-ELM. Simulation results demonstrate that the proposed method can be tens to hundreds of times faster than other incremental ELM algorithms.

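    For context, a minimal sketch of the standard ELM baseline that B-ELM builds on: random hidden weights and biases, followed by one least-squares solve for the output weights. B-ELM's non-random choice of some hidden nodes is not reproduced here.

        import numpy as np

        # Standard ELM: random hidden weights, one least-squares solve for
        # the output weights.
        rng = np.random.default_rng(0)
        X = rng.uniform(-1, 1, size=(300, 2))
        y = np.sin(3 * X[:, 0]) * X[:, 1]            # regression target

        L = 50                                       # hidden nodes
        W = rng.standard_normal((2, L))              # random input weights
        b = rng.standard_normal(L)                   # random biases
        H = np.tanh(X @ W + b)                       # hidden-layer matrix
        beta, *_ = np.linalg.lstsq(H, y, rcond=None) # output weights

        print("train RMSE:", np.sqrt(np.mean((H @ beta - y) ** 2)))
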
  • Enhancing Weak Signal Transmission Through a Feedforward Network

    Page(s): 1506 - 1512

    The ability to transmit and amplify weak signals is fundamental to the signal processing of artificial devices in engineering. Using a multilayer feedforward network of coupled double-well oscillators, as well as FitzHugh-Nagumo oscillators, we investigate the conditions under which a weak signal received by the first layer can be transmitted through the network with or without amplitude attenuation. We find that the coupling strength and the states of the nodes in the first layer act as two-state switches that determine whether the transmission is significantly enhanced or exponentially decreased. We hope this finding will be useful for designing artificial signal amplifiers.

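    A hedged numerical sketch of the setup: a feedforward chain of double-well units with dynamics xdot = x - x^3 + input, where only the first layer receives the weak periodic signal and each later layer is driven by the mean field of the previous one; a lock-in accumulator estimates the per-layer response at the signal frequency. All sizes and constants are illustrative.

        import numpy as np

        # Feedforward chain of double-well units xdot = x - x^3 + input.
        # Only layer 0 receives the weak periodic signal; each later layer
        # is driven by the mean field of the previous one.
        rng = np.random.default_rng(0)
        layers, n, c, w = 5, 10, 0.8, 0.5
        dt, steps = 0.01, 100000
        x = 0.1 * rng.standard_normal((layers, n))

        lock = np.zeros((layers, 2))                 # lock-in accumulators
        for step in range(steps):
            t = step * dt
            drive = np.empty_like(x)
            drive[0] = 0.05 * np.sin(w * t)          # weak input signal
            drive[1:] = c * x[:-1].mean(axis=1, keepdims=True)
            x += dt * (x - x**3 + drive)
            m = x.mean(axis=1)
            lock[:, 0] += dt * m * np.sin(w * t)
            lock[:, 1] += dt * m * np.cos(w * t)

        amp = 2.0 / (steps * dt) * np.hypot(lock[:, 0], lock[:, 1])
        print("layer-wise amplitude at w:", np.round(amp, 3))
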
  • IEEE Computational Intelligence Society Information

    Page(s): C3
  • IEEE Transactions on Neural Networks information for authors

    Page(s): C4

Aims & Scope

IEEE Transactions on Neural Networks and Learning Systems publishes technical articles that deal with the theory, design, and applications of neural networks and related learning systems.


Meet Our Editors

Editor-in-Chief
Derong Liu
Institute of Automation
Chinese Academy of Sciences