
IEEE Transactions on Neural Networks

Issue 9 • September 2008

  • Table of contents

    Page(s): C1
  • IEEE Transactions on Neural Networks publication information

    Page(s): C2
  • Data Visualization and Dimensionality Reduction Using Kernel Maps With a Reference Point

    Page(s): 1501 - 1517

In this paper, a new kernel-based method for data visualization and dimensionality reduction is proposed. A reference point is introduced, corresponding to additional constraints in the problem formulation. In contrast with the class of kernel eigenmap methods, the solution (the coordinates in the low-dimensional space) is characterized by a linear system instead of an eigenvalue problem. The kernel maps with a reference point are generated from a least squares support vector machine (LS-SVM) core that is extended with an additional regularization term for preserving local mutual distances, together with reference point constraints. The kernel maps possess primal and dual model representations and provide out-of-sample extensions, e.g., for validation-based tuning. The method is illustrated on toy problems and real-life data sets.

  • Decision Manifolds—A Supervised Learning Algorithm Based on Self-Organization

    Page(s): 1518 - 1530

In this paper, we present a neural classifier algorithm that locally approximates the decision surface of labeled data by a patchwork of separating hyperplanes, which are arranged under certain topological constraints similar to those of self-organizing maps (SOMs). We take advantage of the fact that these boundaries can often be represented by linear ones connected by a low-dimensional nonlinear manifold, thus influencing the placement of the separators. The resulting classifier allows for a voting scheme that averages over the classification results of neighboring hyperplanes. Our algorithm is computationally efficient in terms of both training and classification. Further, we present a model selection method to estimate the topology of the classification boundary. We demonstrate the algorithm's usefulness on several artificial and real-world data sets and compare it to state-of-the-art supervised learning algorithms.

  • Hybrid Multiobjective Evolutionary Design for Artificial Neural Networks

    Page(s): 1531 - 1548

Evolutionary algorithms are a class of stochastic search methods that attempt to emulate the biological process of evolution, incorporating concepts of selection, reproduction, and mutation. In recent years, there has been an increase in the use of evolutionary approaches in the training of artificial neural networks (ANNs). While evolutionary techniques for neural networks have been shown to provide superior performance over conventional training approaches, the simultaneous optimization of network performance and architecture will almost always result in a slow training process due to the added algorithmic complexity. In this paper, we present a geometrical measure based on the singular value decomposition (SVD) to estimate the number of neurons needed in training a single-hidden-layer feedforward neural network (SLFN). In addition, we develop a new hybrid multiobjective evolutionary approach that includes a variable-length representation allowing easy adaptation of neural network structures; an architectural recombination procedure, based on the geometrical measure, that adapts the number of hidden neurons and facilitates the exchange of neuronal information between candidate designs; and a microhybrid genetic algorithm (muHGA) with an adaptive local-search intensity scheme for local fine-tuning. Finally, the performance of well-known algorithms, as well as the effectiveness and contributions of the proposed approach, is analyzed and validated on a variety of data set types.

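The SVD-based estimate of the required hidden-layer size can be illustrated with a rough sketch: build the hidden-layer output matrix of an oversized SLFN and count its significant singular values. The random network construction, toy data, and tolerance below are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: N samples with d input features.
N, d = 200, 3
X = rng.standard_normal((N, d))

# Hidden-layer output matrix of an (oversized) single-hidden-layer
# feedforward network with random input weights and tanh activation.
n_hidden = 30
W = rng.standard_normal((d, n_hidden))
b = rng.standard_normal(n_hidden)
H = np.tanh(X @ W + b)  # shape (N, n_hidden)

# Singular values of H indicate how many hidden directions carry
# independent information; very small ones flag redundant neurons.
s = np.linalg.svd(H, compute_uv=False)

# Count singular values above a relative tolerance (illustrative choice).
rank_est = int(np.sum(s > 1e-6 * s[0]))
print("suggested upper bound on hidden neurons:", rank_est)
```

The estimate can then serve as an upper bound when the evolutionary search adapts the architecture.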
  • A Neural-Network-Based Model for the Dynamic Simulation of the Tire/Suspension System While Traversing Road Irregularities

    Page(s): 1549 - 1563

This paper deals with the simulation of tire/suspension dynamics using recurrent neural networks (RNNs). RNNs are derived from multilayer feedforward neural networks by adding feedback connections between the output and input layers. The optimal network architecture is derived from a parametric analysis based on the optimal tradeoff between network accuracy and size. The neural network can be trained with experimental data obtained in the laboratory from simulated road profiles (cleats). The results obtained from the neural network show good agreement with the experimental results over a wide range of operating conditions. The NN model can be effectively applied as part of a vehicle system model to accurately predict elastic bushing and tire dynamic behavior. Although the neural network model, as a black-box model, does not provide good insight into the physical behavior of the tire/suspension system, it is a useful tool for assessing vehicle ride and noise, vibration, and harshness (NVH) performance, owing to its good computational efficiency and accuracy.

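The output-to-input feedback structure described above resembles a NARX-style recurrent network: past outputs are fed back alongside past inputs. A minimal free-run sketch with untrained, randomly initialized weights, purely for illustration (not the authors' tire/suspension model):

```python
import numpy as np

def narx_step(u_hist, y_hist, W1, b1, W2, b2):
    """One step of a NARX-style recurrent network: recent inputs and
    fed-back recent outputs enter a tanh hidden layer, giving the next output."""
    x = np.concatenate([u_hist, y_hist])  # input lags + output feedback
    h = np.tanh(W1 @ x + b1)
    return float(W2 @ h + b2)

rng = np.random.default_rng(0)
n_u, n_y, n_h = 3, 2, 8  # input lags, output lags, hidden units
W1 = 0.1 * rng.standard_normal((n_h, n_u + n_y))
b1 = np.zeros(n_h)
W2 = 0.1 * rng.standard_normal(n_h)
b2 = 0.0

# Free-run simulation: each output is fed back as input at the next step.
u = np.sin(0.1 * np.arange(100))  # excitation signal (e.g., a road profile)
y_hist = np.zeros(n_y)
ys = []
for t in range(n_u, len(u)):
    y = narx_step(u[t - n_u:t], y_hist, W1, b1, W2, b2)
    ys.append(y)
    y_hist = np.roll(y_hist, 1)
    y_hist[0] = y
print(len(ys), "simulated outputs")
```

In practice the weights would be trained on measured cleat-test data before such a free-run simulation is meaningful.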
  • Analysis of the Initial Values in Split-Complex Backpropagation Algorithm

    Page(s): 1564 - 1573

When a multilayer perceptron (MLP) is trained with the split-complex backpropagation (SCBP) algorithm, one observes a relatively strong dependence of the performance on the initial values. For effective adjustment of the weights and biases in SCBP, we propose that the range of the initial values should be greater than that of the adjustment quantities. This criterion can reduce the misadjustment of the weights and biases. Based on this criterion, the suitable range of the initial values can be estimated. The results show that the suitable range of the initial values depends on the properties of the communication channel used and on the structure of the MLP (the number of layers and the number of nodes in each layer). The results are studied using equalizer scenarios. The simulation results show that the estimated range of the initial values gives significantly improved performance.

  • An Instance-Based Algorithm With Auxiliary Similarity Information for the Estimation of Gait Kinematics From Wearable Sensors

    Page(s): 1574 - 1582

Wearable human movement measurement systems are increasingly popular as a means of capturing human movement data in real-world situations. Previous work has attempted to estimate segment kinematics during walking from foot acceleration and angular velocity data. In this paper, we propose a novel neural network [GRNN with Auxiliary Similarity Information (GASI)] that estimates joint kinematics by taking account of proximity and gait trajectory slope information through adaptive weighting. Furthermore, multiple kernel bandwidth parameters are used that can adapt to the local data density. To demonstrate the value of the GASI algorithm, hip, knee, and ankle joint motions are estimated from acceleration and angular velocity data for the foot and shank, collected using commercially available wearable sensors. Reference hip, knee, and ankle kinematic data were obtained using externally mounted reflective markers and infrared cameras for subjects while they walked at different speeds. The results provide further evidence that a neural net approach to the estimation of joint kinematics is feasible and shows promise, but other practical issues must be addressed before this approach is mature enough for clinical implementation. Furthermore, they demonstrate the utility of the new GASI algorithm for making estimates from continuous periodic data that include noise and a significant level of variability.

  • Kernel Component Analysis Using an Epsilon-Insensitive Robust Loss Function

    Page(s): 1583 - 1598

Kernel principal component analysis (PCA) is a technique to perform feature extraction in a high-dimensional feature space, which is nonlinearly related to the original input space. The kernel PCA formulation corresponds to an eigendecomposition of the kernel matrix: eigenvectors with large eigenvalues correspond to the principal components in the feature space. Starting from the least squares support vector machine (LS-SVM) formulation to kernel PCA, we extend it to a generalized form of kernel component analysis (KCA) with a general underlying loss function made explicit. For classical kernel PCA, the underlying loss function is L2. In this generalized form, one can plug in other loss functions as well. In the context of robust statistics, it is known that the L2 loss function is not robust because its influence function is not bounded. Therefore, outliers can skew the solution from the desired one. Another issue with kernel PCA is the lack of sparseness: the principal components are dense expansions in terms of kernel functions. In this paper, we introduce robustness and sparseness into kernel component analysis by using an epsilon-insensitive robust loss function. We propose two different algorithms. The first method solves a set of nonlinear equations with kernel PCA as starting points. The second method uses a simplified iterative weighting procedure that leads to solving a sequence of generalized eigenvalue problems. Simulations with toy and real-life data show improvements in terms of robustness together with a sparse representation.

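Classical (L2) kernel PCA, the starting point that this paper generalizes, reduces to an eigendecomposition of the centered kernel matrix. A minimal sketch with a Gaussian (RBF) kernel; the kernel choice, bandwidth, and toy data are illustrative:

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    """Classical L2 kernel PCA: eigendecompose the centered RBF kernel
    matrix; the top eigenvectors give the principal components in
    feature space, scaled to yield projected coordinates."""
    n = X.shape[0]
    # RBF kernel matrix from pairwise squared distances.
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))
    # Center the kernel matrix in feature space.
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one
    # Symmetric eigendecomposition; eigh returns ascending eigenvalues.
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[idx], vecs[:, idx]
    # Scores of the training points along each component.
    return vecs * np.sqrt(np.clip(vals, 0.0, None))

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))
Z = kernel_pca(X, n_components=2)
print(Z.shape)
```

The epsilon-insensitive variants proposed in the paper replace the implicit L2 loss in this formulation to gain robustness and sparseness.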
  • Adaptive Predictive Control Using Neural Network for a Class of Pure-Feedback Systems in Discrete Time

    Page(s): 1599 - 1614

In this paper, adaptive neural network (NN) control is investigated for a class of nonlinear pure-feedback discrete-time systems. By using prediction functions of future states, the pure-feedback system is transformed into an n-step-ahead predictor, based on which state feedback NN control is synthesized. Next, by investigating the relationship between outputs and states, the system is transformed into an input-output predictor model, and output feedback control is then constructed. To overcome the difficulty of the nonaffine appearance of the control input, the implicit function theorem is exploited in the control design, and an NN is employed to approximate the unknown function in the control. In both state feedback and output feedback control, only a single NN is used, and controller singularity is completely avoided. The closed-loop system achieves semiglobal uniform ultimate boundedness (SGUUB) stability, and the output tracking error is driven to within a small neighborhood of zero. Simulation results are presented to show the effectiveness of the proposed control approach.

  • Feedback-Linearization-Based Neural Adaptive Control for Unknown Nonaffine Nonlinear Discrete-Time Systems

    Page(s): 1615 - 1625

A new feedback-linearization-based neural network (NN) adaptive control is proposed for unknown nonaffine nonlinear discrete-time systems. An equivalent model in affine-like form is first derived for the original nonaffine discrete-time systems, since feedback linearization methods cannot be implemented directly for such systems. Then, feedback linearization adaptive control is implemented based on the affine-like equivalent model identified with neural networks. Pretraining is not required, and the weights of the neural networks used in adaptive control are updated directly online based on the input-output measurements. The dead-zone technique is used to remove the requirement of persistent excitation during adaptation. With the proposed neural network adaptive control, stability and performance of the closed-loop system are rigorously established. Illustrative examples are provided to validate the theoretical findings.

  • Training Spiking Neuronal Networks With Applications in Engineering Tasks

    Page(s): 1626 - 1640

In this paper, spiking neuronal models employing means, variances, and correlations for computation are introduced. We present two approaches in the design of spiking neuronal networks, both of which are applied to engineering tasks. In exploring the input-output relationship of integrate-and-fire (IF) neurons with Poisson inputs, we are able to define mathematically robust learning rules, which can be applied to multilayer and time-series networks. We show through experimental applications that it is possible to train spike-rate networks on function approximation problems and on the dynamic task of robot arm control.

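The integrate-and-fire-with-Poisson-input setting that this paper builds on can be sketched as a simple leaky integrate-and-fire simulation. All constants below (rate, time constant, weight, threshold) are illustrative choices, not values from the paper:

```python
import numpy as np

def lif_poisson(rate_hz=800.0, t_sim=1.0, dt=1e-3, tau=0.02,
                v_thresh=1.0, v_reset=0.0, w=0.08, seed=0):
    """Leaky integrate-and-fire neuron driven by a Poisson spike train:
    each input spike adds weight w to the membrane potential v, which
    leaks with time constant tau and fires/resets at threshold."""
    rng = np.random.default_rng(seed)
    n_steps = int(t_sim / dt)
    v, spikes = 0.0, 0
    for _ in range(n_steps):
        # Poisson input: a spike lands in this bin with probability rate*dt.
        inp = rng.random() < rate_hz * dt
        v += dt * (-v / tau) + (w if inp else 0.0)
        if v >= v_thresh:
            spikes += 1
            v = v_reset
    return spikes / t_sim  # output firing rate in Hz

rate_out = lif_poisson()
print("output rate (Hz):", rate_out)
```

Sweeping the input rate and recording the output rate yields the kind of input-output relationship on which spike-rate learning rules can be defined.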
  • A Hybrid ART-GRNN Online Learning Neural Network With an ε-Insensitive Loss Function

    Page(s): 1641 - 1646

In this brief, a new neural network model called generalized adaptive resonance theory (GART) is introduced. GART is a hybrid model that comprises a modified Gaussian adaptive resonance theory (MGA) and the generalized regression neural network (GRNN). It is an enhanced version of the GRNN that preserves the online learning properties of adaptive resonance theory (ART). A series of empirical studies to assess the effectiveness of GART in classification, regression, and time-series prediction tasks is conducted. The results demonstrate that GART is able to produce good performance compared with other methods, including the online sequential extreme learning machine (OSELM) and sequential learning radial basis function (RBF) neural network models.

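The GRNN component underlying GART is, at its core, the classical Nadaraya-Watson kernel regression estimator: a prediction is a kernel-weighted average of the training targets. A minimal sketch (the Gaussian kernel and the bandwidth sigma are illustrative choices):

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    """Generalized regression neural network (Nadaraya-Watson estimator):
    each prediction is a Gaussian-kernel-weighted average of the
    training targets, weighted by distance to the query point."""
    # Pairwise squared distances between query and training points.
    d2 = np.sum((X_query[:, None, :] - X_train[None, :, :])**2, axis=2)
    w = np.exp(-d2 / (2.0 * sigma**2))        # kernel weights
    return (w @ y_train) / np.sum(w, axis=1)  # normalized weighted average

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0])
Xq = np.array([[0.0], [1.5]])
pred = grnn_predict(X, y, Xq)
print(pred)
```

GART augments this one-pass estimator with ART-style online category creation, so the model can keep learning from streaming data.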
  • Delay-Dependent Stability for Recurrent Neural Networks With Time-Varying Delays

    Page(s): 1647 - 1651

This brief is concerned with the stability of static neural networks with time-varying delays. Delay-independent conditions are proposed to ensure the asymptotic stability of the neural network. The delay-independent conditions are less conservative than existing ones. To further reduce the conservatism, delay-dependent conditions are also derived, which can be applied to fast time-varying delays. Expressed as linear matrix inequalities, both the delay-independent and delay-dependent stability conditions can be checked using recently developed algorithms. Examples are provided to illustrate the effectiveness and the reduced conservatism of the proposed results.

  • A Fast and Scalable Recurrent Neural Network Based on Stochastic Meta Descent

    Page(s): 1652 - 1658

This brief presents an efficient and scalable online learning algorithm for recurrent neural networks (RNNs). The approach is based on the real-time recurrent learning (RTRL) algorithm, whereby the sensitivity set of each neuron is reduced to the weights associated with either its input or output links. This yields reduced storage and a computational complexity of O(N²). Stochastic meta descent (SMD), an adaptive step-size scheme for stochastic gradient-descent problems, is employed as a means of incorporating curvature information in order to substantially accelerate the learning process. We also introduce a clustered version of our algorithm to further improve its scalability. Despite the dramatic reduction in resource requirements, simulation results show that the approach outperforms regular RTRL by almost an order of magnitude. Moreover, the scheme lends itself to parallel hardware realization by virtue of the localized property inherent to the learning framework.

  • Symmetric Complex-Valued RBF Receiver for Multiple-Antenna-Aided Wireless Systems

    Page(s): 1659 - 1665

A nonlinear beamforming-assisted detector is proposed for multiple-antenna-aided wireless systems employing complex-valued quadrature phase-shift keying modulation. By exploiting the inherent symmetry of the optimal Bayesian detection solution, a novel complex-valued symmetric radial basis function (SRBF)-network-based detector is developed, which is capable of approaching the optimal Bayesian performance using channel-impaired training data. In the uplink case, adaptive nonlinear beamforming can be efficiently implemented by estimating the system's channel matrix based on the least squares channel estimate. Adaptive implementation of nonlinear beamforming in the downlink case is, by contrast, much more challenging, and we adopt a cluster-variation enhanced clustering algorithm to directly identify the SRBF center vectors required for realizing the optimal Bayesian detector. A simulation example is included to demonstrate the achievable performance improvement of the proposed adaptive nonlinear beamforming solution over the theoretical linear minimum bit error rate beamforming benchmark.

  • Call for Papers: 2009 International Joint Conference on Neural Networks (IJCNN 2009)

    Page(s): 1664
  • IEEE Computational Intelligence Society Information

    Page(s): C3
  • Blank page [back cover]

    Page(s): C4

Aims & Scope

IEEE Transactions on Neural Networks is devoted to the science and technology of neural networks; it publishes significant technical knowledge, exploratory developments, and applications of neural networks, from biology to software to hardware.


This Transactions ceased production in 2011. The current retitled publication is IEEE Transactions on Neural Networks and Learning Systems.

Full Aims & Scope