IEEE Transactions on Neural Networks

Issue 5 • May 2008

Displaying Results 1 - 21 of 21
  • Table of contents

    Publication Year: 2008 , Page(s): C1 - C4
    Freely Available from IEEE
  • IEEE Transactions on Neural Networks publication information

    Publication Year: 2008 , Page(s): C2
    Freely Available from IEEE
  • Symmetric RBF Classifier for Nonlinear Detection in Multiple-Antenna-Aided Systems

    Publication Year: 2008 , Page(s): 737 - 745
    Cited by:  Papers (7)

    In this paper, we propose a powerful symmetric radial basis function (RBF) classifier for nonlinear detection in the so-called "overloaded" multiple-antenna-aided communication systems. By exploiting the inherent symmetry property of the optimal Bayesian detector, the proposed symmetric RBF classifier is capable of approaching the optimal classification performance using noisy training data. The classifier construction process is robust to the choice of the RBF width and is computationally efficient. The proposed solution is capable of providing a signal-to-noise ratio (SNR) gain in excess of 8 dB against the powerful linear minimum bit error rate (BER) benchmark, when supporting four users with the aid of two receive antennas or seven users with four receive antenna elements.
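    The symmetry exploited above can be made concrete with a small sketch (hypothetical centres and weights, not the paper's trained detector): pairing every RBF centre c with its mirror image -c and subtracting the two responses forces the odd symmetry f(-x) = -f(x) of the optimal Bayesian detector.

```python
import math

def symmetric_rbf(x, centers, betas, width):
    # Odd-symmetric RBF score: pairing each centre c with its mirror -c
    # enforces f(-x) = -f(x), the symmetry of the optimal detector.
    def phi(a, b):
        return math.exp(-sum((ai - bi) ** 2 for ai, bi in zip(a, b)) / width)
    return sum(b * (phi(x, c) - phi(x, [-ci for ci in c]))
               for b, c in zip(betas, centers))

# Illustrative values only
x = [0.5, -1.0]
centers = [[1.0, 0.0], [0.0, 1.0]]
betas = [0.7, -0.3]
s = symmetric_rbf(x, centers, betas, width=2.0)
neg = symmetric_rbf([-xi for xi in x], centers, betas, width=2.0)
assert abs(s + neg) < 1e-12   # odd symmetry holds by construction
```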
  • Fast-Learning Adaptive-Subspace Self-Organizing Map: An Application to Saliency-Based Invariant Image Feature Construction

    Publication Year: 2008 , Page(s): 746 - 757
    Cited by:  Papers (4)

    The adaptive-subspace self-organizing map (ASSOM) is useful for invariant feature generation and visualization. However, the learning procedure of the ASSOM is slow. In this paper, two fast implementations of the ASSOM are proposed to speed up its learning, based on a close analysis of the ASSOM basis rotation operator. We investigate the objective function approximately maximized by the classical rotation operator. We then explore two schemes for applying the proposed ASSOM implementations to saliency-based invariant feature construction for image classification. In the first scheme, a cumulative activity map computed from a single ASSOM is used as the descriptor of the input image. In the second scheme, one ASSOM is used per image category and a joint cumulative activity map is calculated as the descriptor. Both schemes are evaluated on a subset of the Corel photo database with ten classes. The multi-ASSOM scheme is favored. It is also applied to adult image filtering, with promising results.
  • Pattern Representation in Feature Extraction and Classifier Design: Matrix Versus Vector

    Publication Year: 2008 , Page(s): 758 - 769
    Cited by:  Papers (2)

    The matrix, as an extension of the vector pattern representation, has proven to be effective in feature extraction. However, the classifier that follows matrix-pattern-oriented feature extraction is generally still based on the vector pattern representation (namely, MatFE + VecCD), so the demonstrated gains in classification are attributable solely to the matrix representation used in feature extraction. This paper examines the possibility of applying the matrix pattern representation to both feature extraction and classifier design. To this end, we propose a fully matrixized approach, i.e., matrix-pattern-oriented feature extraction followed by matrix-pattern-oriented classifier design (MatFE + MatCD). To validate MatFE + MatCD more comprehensively, we further consider all possible combinations of feature extraction (FE) and classifier design (CD) on patterns represented by matrices and vectors, respectively: MatFE + MatCD, MatFE + VecCD, matrix-pattern-oriented classifier design alone (MatCD), vector-pattern-oriented feature extraction followed by matrix-pattern-oriented classifier design (VecFE + MatCD), vector-pattern-oriented feature extraction followed by vector-pattern-oriented classifier design (VecFE + VecCD), and vector-pattern-oriented classifier design alone (VecCD). The experiments show the following: 1) the fully matrixized approach (MatFE + MatCD) is effective and efficient on patterns with prior structural knowledge, such as images; and 2) the matrix offers a feasible alternative pattern representation for feature extraction and classifier design, and also provides a validation of the "ugly duckling" and "no free lunch" theorems.
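    The distinction can be illustrated with the simplest matrix-pattern ingredient: a bilinear score u^T X v on the matrix X, instead of w^T vec(X) on its flattened vector. This sketch is illustrative only, not the paper's MatCD construction:

```python
def bilinear_feature(X, u, v):
    # Matrix-pattern feature u^T X v, versus the vector form w^T vec(X).
    return sum(u[i] * sum(X[i][j] * v[j] for j in range(len(v)))
               for i in range(len(u)))

X = [[1.0, 2.0],
     [3.0, 4.0]]
u, v = [1.0, 0.0], [0.0, 1.0]
assert bilinear_feature(X, u, v) == 2.0   # u = e1, v = e2 picks out X[0][1]
```

    Note the parameter saving: the bilinear form uses len(u) + len(v) weights where the vector form uses len(u) * len(v), one reason matrix representations suit structured patterns such as images.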
  • The Greatest Allowed Relative Error in Weights and Threshold of Strict Separating Systems

    Publication Year: 2008 , Page(s): 770 - 781
    Cited by:  Papers (3)

    An important consideration when applying neural networks is the sensitivity to the weights and threshold of a strict separating system representing a linearly separable function. Perturbations may affect the weights and threshold, so it is important to estimate the maximal percentage error in the weights and threshold that can be allowed without altering the linearly separable function. In this paper, we provide the greatest allowed bound that can be associated with every strict separating system representing a linearly separable function. The proposed bound improves on the tolerance obtained by Hu. Furthermore, it is the greatest such bound for any strict separating system, which is why we call it the greatest tolerance.
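    A strict separating system is a weight vector and threshold realizing a linearly separable Boolean function; the tolerance question is how much relative perturbation the pair can absorb before the realized function changes. A brute-force check (illustrative values, not the paper's bound):

```python
from itertools import product

def realizes(weights, threshold, truth):
    # Does (weights, threshold) strictly separate according to `truth`?
    for x in product([0, 1], repeat=len(weights)):
        s = sum(w * xi for w, xi in zip(weights, x))
        if (s > threshold) != truth[x]:
            return False
    return True

# AND of two inputs as a strict separating system (illustrative values)
w, t = [1.0, 1.0], 1.5
truth = {x: x == (1, 1) for x in product([0, 1], repeat=2)}
assert realizes(w, t, truth)

# A uniform 10% relative perturbation of every weight and the threshold
# leaves this particular function intact.
w2 = [wi * 1.1 for wi in w]
assert realizes(w2, t * 1.1, truth)
```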
  • Equilibria and Their Bifurcations in a Recurrent Neural Network Involving Iterates of a Transcendental Function

    Publication Year: 2008 , Page(s): 782 - 794
    Cited by:  Papers (6)

    Some practical models contain such complicated mathematical expressions that it is hard to determine the number and distribution of all their equilibria, let alone the qualitative properties and bifurcations of those equilibria. The three-node recurrent neural network system with two free weight parameters, originally introduced by Ruiz, Owens, and Townley in 1997, is such a system: its equilibrium equation involves a transcendental function and its iterates. In this paper, without computing the coordinates of the equilibria, we present an effective technique to determine their number and distribution. Even without full information about the equilibria, our method makes it possible to study their qualitative properties and to discuss their saddle-node, pitchfork, and Hopf bifurcations by approximating center manifolds.
  • A Galerkin/Neural-Network-Based Design of Guaranteed Cost Control for Nonlinear Distributed Parameter Systems

    Publication Year: 2008 , Page(s): 795 - 807
    Cited by:  Papers (4)

    This paper presents a Galerkin/neural-network-based guaranteed cost control (GCC) design for a class of parabolic partial differential equation (PDE) systems with unknown nonlinearities. A parabolic PDE system typically involves a spatial differential operator whose eigenspectrum can be partitioned into a finite-dimensional slow part and an infinite-dimensional stable fast complement. Motivated by this, in the proposed control scheme, the Galerkin method is first applied to the PDE system to derive an ordinary differential equation (ODE) system with unknown nonlinearities that accurately describes the dynamics of the dominant (slow) modes of the PDE system. The resulting nonlinear ODE system is then parameterized by a multilayer neural network (MNN) with one hidden layer and zero bias terms. Based on the neural model and a Lur'e-type Lyapunov function, a linear modal feedback controller is developed to stabilize the closed-loop PDE system and to provide an upper bound on the quadratic cost function associated with the finite-dimensional slow system for all admissible approximation errors of the network. The GCC problem is formulated as a linear matrix inequality (LMI) problem, and a suboptimal guaranteed cost controller, in the sense of minimizing the cost bound, is obtained using existing LMI optimization techniques. Finally, the proposed design method is applied to the control of the temperature profile of a catalytic rod.
  • Trend Time-Series Modeling and Forecasting With Neural Networks

    Publication Year: 2008 , Page(s): 808 - 816
    Cited by:  Papers (15)

    Despite its great importance, there is no general consensus on how to model the trends in time-series data. Compared to traditional approaches, neural networks (NNs) have shown some promise in time-series forecasting. This paper investigates how best to model trend time series using NNs. Four different strategies (raw data, raw data with time index, detrending, and differencing) are used to model various trend patterns (linear, nonlinear, deterministic, stochastic, and breaking trend). We find that with NNs, differencing often gives meritorious results regardless of the underlying data-generating process (DGP). This finding is also confirmed on the real gross national product (GNP) series.
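    The differencing strategy the study favors amounts to training the network on period-to-period changes and integrating the forecasts back to levels; a minimal sketch (toy data, not the paper's experiments):

```python
def difference(series):
    # First-order differencing: the NN is trained on changes, not levels,
    # which removes deterministic or stochastic trends.
    return [b - a for a, b in zip(series, series[1:])]

def undifference(first_value, diffs):
    # Invert differencing to map forecasts back to the original scale.
    levels = [first_value]
    for d in diffs:
        levels.append(levels[-1] + d)
    return levels

y = [0.5 * t for t in range(10)]   # toy linear-trend series
d = difference(y)                  # constant 0.5 increments: trend removed
assert undifference(y[0], d) == y  # differencing is exactly invertible
```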
  • Robust Neural Network Tracking Controller Using Simultaneous Perturbation Stochastic Approximation

    Publication Year: 2008 , Page(s): 817 - 835
    Cited by:  Papers (14)

    This paper considers the design of robust neural network tracking controllers for nonlinear systems. The neural network is used in the closed-loop system to estimate the nonlinear system function. We introduce conic sector theory to establish a robust neural control system with guaranteed boundedness for both the input/output (I/O) signals and the weights of the neural network. The neural network is trained by the simultaneous perturbation stochastic approximation (SPSA) method instead of the standard backpropagation (BP) algorithm. The proposed neural control system guarantees closed-loop stability of the estimation system and good tracking performance. The performance improvement over existing systems can be quantified in terms of preventing weight shifts, fast convergence, and robustness against system disturbance.
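    SPSA's appeal is that each update needs only two loss evaluations, however many weights there are. A minimal sketch of one SPSA step on a toy quadratic (illustrative only; not the paper's controller or gain schedule):

```python
import random

def spsa_step(theta, loss, a=0.1, c=0.1):
    # One SPSA update: the gradient is estimated from only two loss
    # evaluations along a random simultaneous (Rademacher) perturbation.
    delta = [random.choice([-1.0, 1.0]) for _ in theta]
    up = [t + c * d for t, d in zip(theta, delta)]
    down = [t - c * d for t, d in zip(theta, delta)]
    g = (loss(up) - loss(down)) / (2.0 * c)
    # Dividing by delta[i] (= multiplying, since it is +/-1) gives the
    # per-coordinate gradient estimate.
    return [t - a * g / d for t, d in zip(theta, delta)]

# Minimize a toy quadratic with fixed gains.
random.seed(0)
loss = lambda th: sum(t * t for t in th)
theta = [1.0, -2.0]
for _ in range(200):
    theta = spsa_step(theta, loss)
```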
  • Multilayer Perceptrons: Approximation Order and Necessary Number of Hidden Units

    Publication Year: 2008 , Page(s): 836 - 844
    Cited by:  Papers (21)

    This paper considers the approximation of sufficiently smooth multivariable functions with a multilayer perceptron (MLP). For a given approximation order, explicit formulas are derived for the necessary number of hidden units and its distribution across the hidden layers of the MLP. These formulas depend only on the number of input variables and on the desired approximation order. The concept of approximation order encompasses Kolmogorov-Gabor polynomials and discrete Volterra series, which are widely used in static and dynamic models of nonlinear systems. The results are obtained by considering structural properties of the Taylor polynomials of the function in question and of the MLP function.
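    The size of the approximation problem grows combinatorially: a Taylor polynomial of order d in n input variables has C(n+d, d) monomials, the kind of count on which such hidden-unit formulas depend (the paper's exact formulas are not reproduced here):

```python
from math import comb

def taylor_terms(n_inputs, order):
    # Number of monomials of total degree <= order in n_inputs variables,
    # i.e., the size of the Taylor polynomial the MLP must reproduce.
    return comb(n_inputs + order, order)

assert taylor_terms(1, 3) == 4    # 1, x, x^2, x^3
assert taylor_terms(2, 2) == 6    # 1, x, y, x^2, xy, y^2
```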
  • Stability and Hopf Bifurcation of a General Delayed Recurrent Neural Network

    Publication Year: 2008 , Page(s): 845 - 854
    Cited by:  Papers (9)

    In this paper, the stability and bifurcation of a general recurrent neural network with multiple time delays are considered, where all the variables of the network can be regarded as bifurcation parameters. It is found that Hopf bifurcation occurs when these parameters pass through critical values at which the conditions for local asymptotic stability of the equilibrium are no longer satisfied. By analyzing the characteristic equation and using the frequency-domain method, the existence of Hopf bifurcation is proved. The stability of the bifurcating periodic solutions is determined by the harmonic balance approach, the Nyquist criterion, and the graphical Hopf bifurcation theorem. Moreover, a critical condition is derived under which stability is not guaranteed; thus a necessary and sufficient condition for local asymptotic stability is obtained, from which the essential dynamics of the delayed neural network are revealed. Finally, numerical results are given to verify the theoretical analysis, and some interesting phenomena are observed and reported.
  • Global Asymptotic Stability of Recurrent Neural Networks With Multiple Time-Varying Delays

    Publication Year: 2008 , Page(s): 855 - 873
    Cited by:  Papers (81)

    In this paper, several sufficient conditions are established for the global asymptotic stability of recurrent neural networks with multiple time-varying delays. The Lyapunov-Krasovskii stability theory for functional differential equations and the linear matrix inequality (LMI) approach are employed in our investigation. The results are shown to be generalizations of some previously published results and are less conservative than existing results. The present results are also applied to recurrent neural networks with constant time delays.
  • Towards the Optimal Design of Numerical Experiments

    Publication Year: 2008 , Page(s): 874 - 882
    Cited by:  Papers (2)

    This paper addresses the problem of the optimal design of numerical experiments for the construction of nonlinear surrogate models. We describe a new method, called learner disagreement from experiment resampling (LDR), which borrows ideas from active learning and from resampling methods: the analysis of the divergence of the predictions provided by a population of models, constructed by resampling, allows an iterative determination of the point of the input space where a numerical experiment should be performed in order to improve the accuracy of the predictor. The LDR method is illustrated on neural network models with bootstrap resampling, and on orthogonal polynomials with leave-one-out resampling. Other methods of experimental design, such as random selection and D-optimal selection, are investigated on the same benchmark problems.
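    The LDR idea of running the next experiment where resampled models disagree most can be sketched as follows (a toy linear surrogate stands in for the paper's neural networks and polynomials; names and data are illustrative only):

```python
import random

def bootstrap_models(xs, ys, n_models, fit):
    # Fit one model per bootstrap resample of the experiments run so far.
    models = []
    for _ in range(n_models):
        idx = [random.randrange(len(xs)) for _ in range(len(xs))]
        models.append(fit([xs[i] for i in idx], [ys[i] for i in idx]))
    return models

def next_experiment(candidates, models):
    # Choose the candidate input on which the resampled models disagree
    # most, measured by the variance of their predictions.
    def disagreement(x):
        preds = [m(x) for m in models]
        mean = sum(preds) / len(preds)
        return sum((p - mean) ** 2 for p in preds) / len(preds)
    return max(candidates, key=disagreement)

def fit_line(xs, ys):
    # Toy surrogate: degree-1 least-squares fit.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs) or 1e-12  # guard degenerate resample
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    return lambda x, a=my - b * mx, b=b: a + b * x

random.seed(0)
xs = [0.0, 0.2, 0.4, 0.6]
ys = [x ** 2 for x in xs]          # nonlinear truth, linear surrogate
models = bootstrap_models(xs, ys, 20, fit_line)
x_next = next_experiment([0.1, 0.5, 1.5], models)
```

    Disagreement is typically largest where the models extrapolate, so the method tends to place experiments away from well-sampled regions.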
  • Blur Identification by Multilayer Neural Network Based on Multivalued Neurons

    Publication Year: 2008 , Page(s): 883 - 898
    Cited by:  Papers (37)

    A multilayer neural network based on multivalued neurons (MLMVN) is a neural network with a traditional feedforward architecture that nonetheless has a number of distinctive features. Its backpropagation learning algorithm is derivative-free. Its functionality is superior to that of traditional feedforward neural networks and of a variety of kernel-based networks. Its higher flexibility and faster adaptation to the target mapping make it possible to model complex problems using simpler networks. In this paper, the MLMVN is used to identify both the type and the parameters of the point spread function, whose precise identification is of crucial importance for image deblurring. The simulation results show the high efficiency of the proposed approach. It is confirmed that the MLMVN is a powerful tool for solving classification problems, especially multiclass ones.
  • Minimizing the Effect of Process Mismatch in a Neuromorphic System Using Spike-Timing-Dependent Adaptation

    Publication Year: 2008 , Page(s): 899 - 913
    Cited by:  Papers (6)

    This paper investigates whether spike-timing-dependent plasticity (STDP) can minimize the effect of mismatch within the context of a depth-from-motion algorithm. To improve noise rejection, this algorithm contains a spike prediction element, whose performance is degraded by analog very large scale integration (VLSI) mismatch. The error between the actual spike arrival time and the prediction is used as the input to an STDP circuit, to improve future predictions. Before STDP adaptation, the error reflects the degree of mismatch within the prediction circuitry. After STDP adaptation, the error indicates to what extent the adaptive circuitry can minimize the effect of transistor mismatch. The circuitry is tested with static and varying prediction times and chip results are presented. The effect of noisy spikes is also investigated. Under all conditions the STDP adaptation is shown to improve performance.
  • A Bayesian Perspective on Stochastic Neurocontrol

    Publication Year: 2008 , Page(s): 914 - 924
    Cited by:  Papers (6)

    Control design for stochastic uncertain nonlinear systems is traditionally based on minimizing the expected value of a suitably chosen loss function. Moreover, most control methods assume the certainty equivalence principle to simplify the problem and make it computationally tractable. We offer an improved probabilistic framework that is not constrained by these assumptions and that provides a more natural setting for incorporating and dealing with uncertainty. The focus of this paper is on developing this framework to obtain an optimal control strategy using a fully probabilistic approach to information extraction from process data, which does not require detailed knowledge of the system dynamics. The proposed framework also handles input-dependent noise. A basic paradigm is proposed and the resulting algorithm is discussed. The proposed probabilistic control method applies to the general class of nonlinear discrete-time systems; it is demonstrated theoretically on the affine class, and a nonlinear simulation example is provided to validate the theoretical development.
  • Robot Brains. Circuits and Systems for Conscious Machines (P. Haikonen; 2007) [Book review]

    Publication Year: 2008 , Page(s): 925 - 926
    Freely Available from IEEE
  • World Congress on Computational Intelligence - WCCI 2008

    Publication Year: 2008 , Page(s): 927
    Freely Available from IEEE
  • Why we joined ... [advertisement]

    Publication Year: 2008 , Page(s): 928
    Freely Available from IEEE
  • IEEE Computational Intelligence Society Information

    Publication Year: 2008 , Page(s): C3
    Freely Available from IEEE

Aims & Scope

IEEE Transactions on Neural Networks is devoted to the science and technology of neural networks; it publishes articles that disclose significant technical knowledge, exploratory developments, and applications of neural networks, from biology to software to hardware.

 

This Transactions ceased production in 2011. The current retitled publication is IEEE Transactions on Neural Networks and Learning Systems.
