IEEE Transactions on Neural Networks

Issue 2 • Date March 2007

Displaying Results 1 - 25 of 35
  • Table of contents

    Publication Year: 2007 , Page(s): C1 - C4
    PDF (38 KB) | Freely Available from IEEE
  • IEEE Transactions on Neural Networks publication information

    Publication Year: 2007 , Page(s): C2
    PDF (36 KB) | Freely Available from IEEE
  • Efficient Optimal Linear Boosting of a Pair of Classifiers

    Publication Year: 2007 , Page(s): 317 - 328
    Cited by:  Papers (2)
    Quick Abstract | PDF (531 KB) | HTML

    Boosting is a meta-learning algorithm which takes as input a set of classifiers and combines these classifiers to obtain a better classifier. We consider the combinatorial problem of efficiently and optimally boosting a pair of classifiers by reducing this problem to that of constructing the optimal linear separator for two sets of points in two dimensions. Specifically, let each point x ∈ R^2 be assigned a weight W(x) > 0, where the weighting function can be an arbitrary positive function. We give efficient (low-order polynomial time) algorithms for constructing an optimal linear "separator" ℓ defined as follows. Let Q be the set of points misclassified by ℓ. Then, the weight of Q, defined as the sum of the weights of the points in Q, is minimized. If W(x) = 1 for all points, then the resulting separator minimizes (exactly) the misclassification error. Without an increase in computational complexity, our algorithm can be extended to output the leave-one-out error, an unbiased estimate of the expected performance of the resulting boosted classifier.
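    The reduction can be illustrated with a naive brute-force search (an O(n^3) sketch of my own, not the paper's low-order polynomial algorithm): an optimal line can always be perturbed to pass arbitrarily close to two of the input points, so it suffices to try slightly offset lines through every pair of points, in both orientations, and keep the one with the smallest misclassified weight.

```python
import itertools

def weighted_error(points, labels, weights, a, b, c):
    """Weight of points misclassified by the line a*x + b*y + c = 0
    (label +1 is expected on the positive side)."""
    err = 0.0
    for (x, y), lab, w in zip(points, labels, weights):
        side = a * x + b * y + c
        if side * lab <= 0:          # wrong side (or exactly on the line)
            err += w
    return err

def best_separator(points, labels, weights):
    """Naive O(n^3) search: try lines through every pair of points,
    slightly offset to either side, in both orientations; return the
    minimum total weight of misclassified points."""
    best = sum(weights)              # worst case: everything misclassified
    eps = 1e-9
    for p, q in itertools.combinations(range(len(points)), 2):
        (x1, y1), (x2, y2) = points[p], points[q]
        a, b = y2 - y1, x1 - x2      # normal of the line through p and q
        c0 = -(a * x1 + b * y1)
        for c in (c0 + eps, c0 - eps):
            for sgn in (1, -1):      # both orientations
                best = min(best, weighted_error(points, labels, weights,
                                                sgn * a, sgn * b, sgn * c))
    return best
```

    On a linearly separable weighted set this returns 0; on an XOR-like configuration it returns the weight of the cheapest unavoidable mistake.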

  • A Pyramidal Neural Network For Visual Pattern Recognition

    Publication Year: 2007 , Page(s): 329 - 343
    Cited by:  Papers (19)  |  Patents (1)
    Quick Abstract | PDF (2861 KB) | HTML

    In this paper, we propose a new neural architecture for classification of visual patterns that is motivated by the two concepts of image pyramids and local receptive fields. The new architecture, called pyramidal neural network (PyraNet), has a hierarchical structure with two types of processing layers: pyramidal layers and one-dimensional (1-D) layers. In the new network, nonlinear two-dimensional (2-D) neurons are trained to perform both image feature extraction and dimensionality reduction. We present and analyze five training methods for PyraNet [gradient descent (GD), gradient descent with momentum, resilient backpropagation (RPROP), Polak-Ribiere conjugate gradient (CG), and Levenberg-Marquardt (LM)] and two choices of error functions [mean-square error (MSE) and cross-entropy (CE)]. In this paper, we apply PyraNet to determine gender from a facial image, and compare its performance on the standard facial recognition technology (FERET) database with three classifiers: the convolutional neural network (NN), the k-nearest neighbor (k-NN), and the support vector machine (SVM).

  • Anticipation-Based Temporal Sequences Learning in Hierarchical Structure

    Publication Year: 2007 , Page(s): 344 - 358
    Cited by:  Papers (7)
    Quick Abstract | PDF (2916 KB) | HTML

    Temporal sequence learning is one of the most critical components for human intelligence. In this paper, a novel hierarchical structure for complex temporal sequence learning is proposed. Hierarchical organization, a prediction mechanism, and one-shot learning characterize the model. In the lowest level of the hierarchy, we use a modified Hebbian learning mechanism for pattern recognition. Our model employs both active 0 and active 1 sensory inputs. A winner-take-all (WTA) mechanism is used to select active neurons that become the input for sequence learning at higher hierarchical levels. Prediction is an essential element of our temporal sequence learning model. By correct prediction, the machine indicates that it knows the current sequence and does not require additional learning. When the prediction is incorrect, one-shot learning is executed and the machine learns the new input sequence as soon as the sequence is completed. A four-level hierarchical structure that isolates letters, words, sentences, and strophes is used in this paper to illustrate the model.
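    The WTA selection step can be sketched generically (a k-winners variant of my own; nothing about the paper's specific network is assumed): only the most active neurons keep their activations, and their indices feed the next hierarchical level.

```python
import numpy as np

def winner_take_all(activations, k=1):
    """Keep only the k most active neurons; zero out the rest.
    Returns the masked activations and the winners' indices."""
    act = np.asarray(activations, dtype=float)
    winners = np.argsort(act)[-k:]           # indices of the k largest
    mask = np.zeros_like(act)
    mask[winners] = 1.0
    return act * mask, sorted(winners.tolist())
```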

  • Support Vector Echo-State Machine for Chaotic Time-Series Prediction

    Publication Year: 2007 , Page(s): 359 - 372
    Cited by:  Papers (29)
    Quick Abstract | PDF (1198 KB) | HTML

    A novel chaotic time-series prediction method based on support vector machines (SVMs) and echo-state mechanisms is proposed. The basic idea is replacing the "kernel trick" with the "reservoir trick" in dealing with nonlinearity, that is, performing linear support vector regression (SVR) in the high-dimensional "reservoir" state space; the solution benefits from the structural risk minimization principle, and we call the method support vector echo-state machines (SVESMs). SVESMs belong to a special kind of recurrent neural networks (RNNs) with a convex objective function, so their solution is global, optimal, and unique. SVESMs are especially efficient in dealing with real-life nonlinear time series, and their generalization ability and robustness are obtained by a regularization operator and a robust loss function. The method is tested on the benchmark prediction problem of the Mackey-Glass time series and applied to real-life time series such as the monthly sunspots series and the runoff series of the Yellow River, and the prediction results are promising.
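    The "reservoir trick" can be sketched as follows: a fixed random recurrent network produces a high-dimensional state, and only a linear readout is trained on it. Plain ridge regression stands in here for the paper's linear SVR (both operate in the same reservoir state space), and all sizes, constants, and the toy input are illustrative choices of mine.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random reservoir: the recurrent weights are never trained.
N, steps, washout = 100, 500, 100
W_in = rng.uniform(-0.5, 0.5, N)
W = rng.normal(0.0, 1.0, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius 0.9

u = np.sin(0.3 * np.arange(steps + 1))            # toy input time series
x = np.zeros(N)
states = []
for t in range(steps):
    x = np.tanh(W @ x + W_in * u[t])              # echo-state update
    states.append(x.copy())

X = np.array(states[washout:])                    # drop the transient
y = u[washout + 1:steps + 1]                      # one-step-ahead targets
w = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)  # ridge readout
mse = float(np.mean((X @ w - y) ** 2))            # small training error
```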

  • Multifeedback-Layer Neural Network

    Publication Year: 2007 , Page(s): 373 - 384
    Cited by:  Papers (10)
    Quick Abstract | PDF (563 KB) | HTML

    The architecture and training procedure of a novel recurrent neural network (RNN), referred to as the multifeedback-layer neural network (MFLNN), are described in this paper. The main difference of the proposed network compared to the available RNNs is that the temporal relations are provided by means of neurons arranged in three feedback layers, not by simple feedback elements, in order to enrich the representation capabilities of the recurrent networks. The feedback layers provide local and global recurrences via nonlinear processing elements. In these feedback layers, weighted sums of the delayed outputs of the hidden and of the output layers are passed through certain activation functions and applied to the feedforward neurons via adjustable weights. Both online and offline training procedures based on the backpropagation through time (BPTT) algorithm are developed. The adjoint model of the MFLNN is built to compute the derivatives with respect to the MFLNN weights, which are then used in the training procedures. The Levenberg-Marquardt (LM) method with a trust region approach is used to update the MFLNN weights. The performance of the MFLNN is demonstrated by applying it to several illustrative temporal problems, including chaotic time-series prediction and nonlinear dynamic system identification, on which it performed better than several networks available in the literature.

  • Self-Organizing and Self-Evolving Neurons: A New Neural Network for Optimization

    Publication Year: 2007 , Page(s): 385 - 396
    Cited by:  Papers (10)
    Quick Abstract | PDF (933 KB) | HTML

    A self-organizing and self-evolving neurons (SOSENs) neural network is proposed. Each neuron of the SOSENs evolves itself with a simulated annealing (SA) algorithm. The self-evolving behavior of each neuron is a local improvement that speeds up convergence. The chance of reaching the global optimum is increased because multiple SAs are run in the search space. Optimum results obtained by the SOSENs are better on average than those obtained by a single SA. Experimental results show that the SOSENs need fewer temperature changes than the SA to reach the global minimum. Every neuron exhibits a self-organizing behavior similar to those of the self-organizing map (SOM), particle swarm optimization (PSO), and the self-organizing migrating algorithm (SOMA). Finally, the computational time of parallel SOSENs can be less than that of the SA.
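    Each SOSENs neuron is essentially an independent simulated annealing run; a minimal sketch (toy objective, schedule, and starting points are my illustrative choices, not the paper's) shows how taking the best of several runs raises the chance of landing near the global minimum.

```python
import math
import random

def simulated_annealing(f, x0, t0=1.0, cooling=0.95, steps=200, seed=0):
    """One SA run on objective f: temperature-scaled Gaussian moves,
    always accept improvements, accept uphill moves with the usual
    Metropolis probability exp(-delta / t)."""
    rng = random.Random(seed)
    x, fx, t = x0, f(x0), t0
    for _ in range(steps):
        cand = x + rng.gauss(0, 1) * t           # temperature-scaled move
        fc = f(cand)
        if fc < fx or rng.random() < math.exp((fx - fc) / max(t, 1e-12)):
            x, fx = cand, fc
        t *= cooling                             # cool down
    return x, fx

# Several "neurons" (independent SA runs) from different starting points;
# keep the best result, as multiple runs cover the search space better.
f = lambda x: (x - 3.0) ** 2
best = min((simulated_annealing(f, x0, seed=s)
            for s, x0 in enumerate([-5.0, 0.0, 5.0])), key=lambda r: r[1])
```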

  • Incremental Hierarchical Discriminant Regression

    Publication Year: 2007 , Page(s): 397 - 415
    Cited by:  Papers (19)
    Quick Abstract | PDF (1891 KB) | HTML

    This paper presents incremental hierarchical discriminant regression (IHDR), which incrementally builds a decision tree or regression tree for very high-dimensional regression or decision spaces in an online, real-time learning system. Biologically motivated, it is an approximate computational model for the automatic development of associative cortex, with both bottom-up sensory inputs and top-down motor projections. At each internal node of the IHDR tree, information in the output space is used to automatically derive the local subspace spanned by the most discriminating features. Embedded in the tree is a hierarchical probability distribution model used to prune very unlikely cases during the search. The number of parameters in the coarse-to-fine approximation is dynamic and data-driven, enabling the IHDR tree to automatically fit data with unknown distribution shapes (for which it is difficult to select the number of parameters up front). The IHDR tree dynamically assigns long-term memory to avoid the loss-of-memory problem typical of global-fitting learning algorithms for neural networks. A major challenge for an incrementally built tree is that the number of samples varies arbitrarily during the construction process. An incrementally updated probability model, called the sample-size-dependent negative-log-likelihood (SDNLL) metric, is used to deal with large, small, and unbalanced sample-size cases measured among different internal nodes of the IHDR tree. We report experimental results for four types of data: synthetic data to visualize the behavior of the algorithms, large face image data, continuous video streams from robot navigation, and publicly available data sets that use human-defined features.

  • Stability and Hopf Bifurcation in a Simplified BAM Neural Network With Two Time Delays

    Publication Year: 2007 , Page(s): 416 - 430
    Cited by:  Papers (17)
    Quick Abstract | PDF (576 KB) | HTML

    Various local periodic solutions may represent different classes of storage patterns or memory patterns, and they arise from the different equilibrium points of neural networks (NNs) by applying the Hopf bifurcation technique. In this paper, a bidirectional associative memory NN with four neurons and multiple delays is considered. By applying the normal form theory and the center manifold theorem, an analysis of its linear stability and Hopf bifurcation is performed. An algorithm is worked out for determining the direction and stability of the bifurcated periodic solutions. Numerical simulation results supporting the theoretical analysis are also given.

  • MCES: A Novel Monte Carlo Evaluative Selection Approach for Objective Feature Selections

    Publication Year: 2007 , Page(s): 431 - 448
    Cited by:  Papers (5)
    Quick Abstract | PDF (717 KB) | HTML

    Most recent research efforts on feature selection have focused mainly on the classification task due to its popularity in the data-mining community. However, feature selection research in nonlinear system estimation has been very limited. Hence, it is reasonable to devise a feature selection approach that is computationally efficient in the nonlinear system estimation context. A novel feature selection approach, Monte Carlo evaluative selection (MCES), is proposed in this paper. MCES is an objective sampling method that derives a better estimation of the relevancy measure. The algorithm is designed to be applicable to both classification and nonlinear regression tasks. The MCES method has been demonstrated to perform well in four sets of experiments, consisting of two classification and two regression tasks. The results demonstrate that the MCES method has the following strong advantages: 1) the ability to identify correlated and irrelevant features based on weight ranking; 2) applicability to both nonlinear system estimation and classification tasks; and 3) independence of the underlying induction algorithms used to derive the performance measures.
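    The flavor of induction-independent, sampling-based relevance scoring can be sketched with a permutation test (a hedged stand-in of my own, not the paper's exact MCES estimator): repeatedly permute one feature and measure how much the evaluator's error grows; relevant features raise the error, irrelevant ones barely move it.

```python
import numpy as np

def mc_relevance(model_error, X, y, n_rounds=30, seed=0):
    """Monte Carlo relevance scores: average increase in model_error
    when each feature column is randomly permuted (breaking its link
    to y) over n_rounds repetitions."""
    rng = np.random.default_rng(seed)
    base = model_error(X, y)
    scores = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_rounds):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])            # permute feature j in place
            scores[j] += model_error(Xp, y) - base
    return scores / n_rounds

# Toy evaluator: mean squared error of a least-squares fit on the data.
def lsq_error(X, y):
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.mean((X @ w - y) ** 2))

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = 2.0 * X[:, 0] + 0.1 * rng.normal(size=200)   # only feature 0 matters
scores = mc_relevance(lsq_error, X, y)
```

    Because the scoring only queries the evaluator as a black box, the same code works with any induction algorithm, which mirrors advantage 3) above.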

  • Performance-Oriented Antiwindup for a Class of Linear Control Systems With Augmented Neural Network Controller

    Publication Year: 2007 , Page(s): 449 - 465
    Cited by:  Papers (5)
    Quick Abstract | PDF (1183 KB) | HTML

    This paper presents a conditioning scheme for a linear control system which is enhanced by a neural network (NN) controller and subjected to a control signal amplitude limit. The NN controller improves the performance of the linear control system by directly estimating an actuator-matched, unmodeled, nonlinear disturbance in closed loop and compensating for it. As disturbances are generally known to be bounded, the nominal NN-control element is modified to keep its output below the disturbance bound. The linear control element is conditioned by an antiwindup (AW) compensator which ensures performance close to the nominal controller and swift recovery from saturation. To this end, the proposed AW compensator is of low order and is designed using convex linear matrix inequality (LMI) optimization.

  • A Recurrent Neural Network for Hierarchical Control of Interconnected Dynamic Systems

    Publication Year: 2007 , Page(s): 466 - 481
    Cited by:  Papers (19)
    Quick Abstract | PDF (1220 KB) | HTML

    A recurrent neural network for the optimal control of a group of interconnected dynamic systems is presented in this paper. On the basis of the decomposition and coordination strategy for interconnected dynamic systems, the proposed neural network has a two-level hierarchical structure: several local optimization subnetworks at the lower level and one coordination subnetwork at the upper level. A goal-coordination method is used to coordinate the interactions between the subsystems. By nesting the dynamic equations of the subsystems into their corresponding local optimization subnetworks, the number of dimensions of the neural network can be reduced significantly. Furthermore, the subnetworks at both the lower and upper levels can work concurrently. Therefore, the computation efficiency, in comparison with the consecutive executions of numerical algorithms on digital computers, is increased dramatically. The proposed method is extended to the case where the control inputs of the subsystems are bounded. The stability analysis shows that the proposed neural network is asymptotically stable. Finally, an example is presented which demonstrates the satisfactory performance of the neural network.

  • Several Extensions in Methods for Adaptive Output Feedback Control

    Publication Year: 2007 , Page(s): 482 - 494
    Cited by:  Papers (6)
    Quick Abstract | PDF (1412 KB) | HTML

    Several extensions to neural network (NN)-based adaptive output feedback control of nonlinear systems are developed. An extension from linearly to nonlinearly parameterized NNs is given for a direct adaptive output feedback approach. An extension that permits the introduction of e-modification in both the direct adaptive approach and an error-observer-based approach is also given. Finally, for the case of nonaffine systems, we eliminate a fixed-point assumption that has appeared in earlier work.

  • Robust Output Feedback Tracking Control for Time-Delay Nonlinear Systems Using Neural Network

    Publication Year: 2007 , Page(s): 495 - 505
    Cited by:  Papers (35)
    Quick Abstract | PDF (541 KB) | HTML

    In this paper, the problem of robust output tracking control for a class of time-delay nonlinear systems is considered. The systems are in the form of a triangular structure with unmodeled dynamics. First, we construct an observer whose gain matrix is scheduled via a linear matrix inequality approach. For the case where the information on the uncertainty bounds is not completely available, we design an observer-based neural network (NN) controller by employing the backstepping method. The resulting closed-loop system is ensured to be stable in the sense of semiglobal boundedness with the help of the changing supply function idea. The observer and the controller designed are both independent of the time delays. Finally, numerical simulations are conducted to verify the effectiveness of the main theoretical results.

  • Weighted Piecewise LDA for Solving the Small Sample Size Problem in Face Verification

    Publication Year: 2007 , Page(s): 506 - 519
    Cited by:  Papers (23)
    Quick Abstract | PDF (673 KB) | HTML

    A novel algorithm that can be used to boost the performance of face-verification methods that utilize Fisher's criterion is presented and evaluated. The algorithm is applied to similarity, or matching error, data and provides a general solution for overcoming the "small sample size" (SSS) problem, where the lack of sufficient training samples causes improper estimation of a linear separation hyperplane between the classes. Two independent phases constitute the proposed method. Initially, a set of weighted piecewise discriminant hyperplanes are used in order to provide a more accurate discriminant decision than the one produced by the traditional linear discriminant analysis (LDA) methodology. The expected classification ability of this method is investigated throughout a series of simulations. The second phase defines proper combinations for person-specific similarity scores and describes an outlier removal process that further enhances the classification ability. The proposed technique has been tested on the M2VTS and XM2VTS frontal face databases. Experimental results indicate that the proposed framework greatly improves the face-verification performance.

  • Neuromorphic Excitable Maps for Visual Processing

    Publication Year: 2007 , Page(s): 520 - 529
    Cited by:  Papers (2)
    Quick Abstract | PDF (2377 KB) | HTML

    An excitable membrane is described which can perform different visual tasks such as contour detection, contour propagation, image segmentation, and motion detection. The membrane is designed to fit into a neuromorphic multichip system. It consists of a single two-dimensional (2-D) layer of locally connected integrate-and-fire neurons and propagates input in the subthreshold and the above-threshold range. It requires adjustment of only one parameter to switch between the visual tasks. The performance of two spiking membranes of different connectivity is compared, a hexagonally and an octagonally connected membrane. Their hardware and system suitability is discussed.

  • Face Recognition Using an Enhanced Independent Component Analysis Approach

    Publication Year: 2007 , Page(s): 530 - 541
    Cited by:  Papers (13)
    Quick Abstract | PDF (2311 KB) | HTML

    This paper is concerned with an enhanced independent component analysis (ICA) and its application to face recognition. Typically, face representations obtained by ICA involve unsupervised learning and high-order statistics. In this paper, we develop an enhancement of the generic ICA by augmenting this method by the Fisher linear discriminant analysis (LDA); hence, its abbreviation, FICA. The FICA is systematically developed and presented along with its underlying architecture. A comparative analysis explores four distance metrics, as well as classification with support vector machines (SVMs). We demonstrate that the FICA approach leads to the formation of well-separated classes in low-dimension subspace and is endowed with a great deal of insensitivity to large variation in illumination and facial expression. The comprehensive experiments are completed for the facial-recognition technology (FERET) face database; a comparative analysis demonstrates that FICA comes with improved classification rates when compared with some other conventional approaches such as eigenface, fisherface, and the ICA itself.

  • Adaptive Cardiac Resynchronization Therapy Device Based on Spiking Neurons Architecture and Reinforcement Learning Scheme

    Publication Year: 2007 , Page(s): 542 - 550
    Cited by:  Papers (3)  |  Patents (2)
    Quick Abstract | PDF (1517 KB) | HTML

    A spiking neural network (NN) architecture that uses Hebbian learning and reinforcement-learning schemes for adapting the synaptic weights is implemented in silicon and performs dynamic optimization according to a hemodynamic sensor for a cardiac resynchronization therapy (CRT) device. The spiking NN architecture dynamically changes the atrioventricular (AV) delay and interventricular (VV) interval parameters according to the information provided by the intracardiac electrograms (IEGMs) and hemodynamic sensors. The spiking NN coprocessor performs the adaptive part and is controlled by a deterministic-algorithm master controller. The simulated cardiac output obtained with the adaptive CRT device is 30% higher than with a nonadaptive CRT device and is likely to provide improvement in the quality of life for patients with congestive heart failure. The spiking NN architecture shows synaptic plasticity acquired during the learning process. The synaptic plasticity is manifested by a dynamic learning rate parameter that correlates patterns of the hemodynamic sensor with the system outputs, i.e., the optimal AV and VV pacing intervals.

  • Adaptive WTA With an Analog VLSI Neuromorphic Learning Chip

    Publication Year: 2007 , Page(s): 551 - 572
    Cited by:  Papers (18)
    Quick Abstract | PDF (1023 KB) | HTML

    In this paper, we demonstrate how a particular spike-based learning rule (where exact temporal relations between input and output spikes of a spiking model neuron determine the changes of the synaptic weights) can be tuned to express rate-based classical Hebbian learning behavior (where the average input and output spike rates are sufficient to describe the synaptic changes). This shift in behavior is controlled by the input statistics and by a single time constant. The learning rule has been implemented in a neuromorphic very large scale integration (VLSI) chip as part of a neurally inspired spike signal image processing system. The latter is the result of the European Union research project Convolution AER Vision Architecture for Real-Time (CAVIAR). Since it is implemented as a spike-based learning rule (which is most convenient in the overall spike-based system), even if it is tuned to show rate behavior, no explicit long-term average signals are computed on the chip. We show the rule's rate-based Hebbian learning ability in a classification task in both simulation and chip experiment, first with artificial stimuli and then with sensor input from the CAVIAR system.

  • Global Reinforcement Learning in Neural Networks

    Publication Year: 2007 , Page(s): 573 - 577
    Cited by:  Papers (4)
    Quick Abstract | PDF (564 KB) | HTML

    In this letter, we have found a more general formulation of the REward Increment = Nonnegative Factor times Offset Reinforcement times Characteristic Eligibility (REINFORCE) learning principle first suggested by Williams. The new formulation has enabled us to apply the principle to global reinforcement learning in networks with various sources of randomness, and to suggest several simple local rules for such networks. Numerical simulations have shown that for simple classification and reinforcement learning tasks, at least one family of the new learning rules gives results comparable to those provided by the well-known A_R-I and A_R-P rules for the Boltzmann machines.
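    Williams' original REINFORCE template (reward increment = nonnegative factor × offset reinforcement × characteristic eligibility) can be sketched for a single Bernoulli-logistic unit, for which the eligibility is (y − p)·x = ∂ ln p(y|x, w)/∂w. The task, learning rate, and baseline below are illustrative choices of mine, not the letter's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

# REINFORCE: delta_w = alpha * (reward - baseline) * eligibility.
# Toy task: reward 1 when the unit's binary output copies sign(x[0]).
w = np.zeros(2)
alpha, baseline = 0.5, 0.5
for _ in range(2000):
    x = rng.choice([-1.0, 1.0], size=2)
    p = 1.0 / (1.0 + np.exp(-(w @ x)))           # firing probability
    y = float(rng.random() < p)                  # stochastic binary output
    r = 1.0 if y == float(x[0] > 0) else 0.0     # reward signal
    w += alpha * (r - baseline) * (y - p) * x    # REINFORCE update

# After learning, w[0] dominates: the output tracks the sign of x[0].
```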

  • Equilibrium-Based Support Vector Machine for Semisupervised Classification

    Publication Year: 2007 , Page(s): 578 - 583
    Cited by:  Papers (7)  |  Patents (1)
    Quick Abstract | PDF (626 KB) | HTML

    A novel learning algorithm for semisupervised classification is proposed. The proposed method first constructs a support function that estimates a support of a data distribution using both labeled and unlabeled data. Then, it partitions a whole data space into a small number of disjoint regions with the aid of a dynamical system. Finally, it labels the decomposed regions utilizing the labeled data and the cluster structure described by the constructed support function. Simulation results show the effectiveness of the proposed method to label out-of-sample unlabeled test data as well as in-sample unlabeled data.

  • The Rosenblatt Bayesian Algorithm Learning in a Nonstationary Environment

    Publication Year: 2007 , Page(s): 584 - 588
    Cited by:  Papers (4)
    Quick Abstract | PDF (251 KB) | HTML

    In this letter, we study online learning in neural networks (NNs) obtained by approximating Bayesian learning. The approach is applied to Gibbs learning with the Rosenblatt potential in a nonstationary environment. The online scheme is obtained by the minimization (maximization) of the Kullback-Leibler divergence (cross entropy) between the true posterior distribution and the parameterized one. The complexity of the learning algorithm is further decreased by projecting the posterior onto a Gaussian distribution and imposing a spherical covariance matrix. We study in detail the particular case of learning linearly separable rules. In the case of a fixed rule, we observe an asymptotic generalization error e_g ∝ α^(-1) for both the spherical and the full covariance matrix approximations. However, in the case of a drifting rule, only the full covariance matrix algorithm shows good performance. This good performance is indeed a surprise since the algorithm is obtained by projecting without the benefit of the extra information on the drift.

  • Sensor Integration for Satellite-Based Vehicular Navigation Using Neural Networks

    Publication Year: 2007 , Page(s): 589 - 594
    Cited by:  Papers (18)
    Quick Abstract | PDF (1066 KB) | HTML

    Land vehicles rely mainly on the global positioning system (GPS) to provide their position with consistent accuracy. However, GPS receivers may encounter frequent GPS outages within urban areas where satellite signals are blocked. In order to overcome this problem, GPS is usually combined with inertial sensors mounted inside the vehicle to obtain a reliable navigation solution, especially during GPS outages. This letter proposes a real-time data fusion technique based on a radial basis function neural network (RBFNN) that integrates GPS with inertial sensors. Field test data were used to examine the performance of the proposed data fusion module, and the results illustrate the merits and limitations of the proposed technique.
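    An RBFNN is a single hidden layer of Gaussian units followed by a linear readout; the sketch below fits a toy 1-D mapping (the sine targets, centers, and widths are hypothetical stand-ins for the letter's GPS/inertial fusion data).

```python
import numpy as np

def rbf_forward(x, centers, widths, w_out):
    """RBF network forward pass: Gaussian hidden units over the input,
    followed by a trained linear output layer."""
    d2 = np.sum((centers - x) ** 2, axis=1)       # squared distances
    h = np.exp(-d2 / (2.0 * widths ** 2))         # Gaussian activations
    return float(h @ w_out)

# Fit the linear readout by least squares on a toy 1-D mapping.
centers = np.linspace(0.0, 1.0, 10)[:, None]      # hidden-unit centers
widths = np.full(10, 0.15)                        # shared Gaussian width
X = np.linspace(0.0, 1.0, 50)[:, None]            # training inputs
H = np.exp(-np.sum((X[:, None, :] - centers) ** 2, axis=2)
           / (2.0 * widths ** 2))                 # hidden activations
y = np.sin(2.0 * np.pi * X[:, 0])                 # toy targets
w_out, *_ = np.linalg.lstsq(H, y, rcond=None)     # linear readout
```

    Only the output weights are solved for; the centers and widths stay fixed, which is what makes RBFNN training fast enough for the real-time fusion setting the letter targets.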

  • New Chaotic PSO-Based Neural Network Predictive Control for Nonlinear Process

    Publication Year: 2007 , Page(s): 595 - 601
    Cited by:  Papers (35)
    Quick Abstract | PDF (592 KB) | HTML

    In this letter, a novel nonlinear neural network (NN) predictive control strategy based on the new tent-map chaotic particle swarm optimization (TCPSO) is presented. The TCPSO incorporates tent-map chaos, which can avoid trapping in local minima and improve the searching performance of standard particle swarm optimization (PSO), and is applied to perform the nonlinear optimization to enhance the convergence and accuracy. Numerical simulations of two benchmark functions are used to test the performance of TCPSO. Furthermore, a simulation on a nonlinear plant is given to illustrate the effectiveness of the proposed control scheme.
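    The core idea of driving PSO with tent-map chaos can be sketched as follows (a hedged approximation of my own: the coefficients and the exact way chaos is injected differ from the letter's TCPSO). The deterministic chaotic sequence replaces the uniform random coefficients r1, r2 in the velocity update.

```python
import numpy as np

def tent_map(z, mu=1.99):
    """Tent map: a simple chaotic iteration on (0, 1), used here to
    generate the r1, r2 coefficients instead of uniform random draws.
    (mu is kept just below 2 to avoid finite-precision collapse to 0.)"""
    return mu * z if z < 0.5 else mu * (1.0 - z)

f = lambda x: float(np.sum(x ** 2))              # sphere test function
rng = np.random.default_rng(0)
pos = rng.uniform(-5.0, 5.0, (20, 2))            # particle positions
vel = np.zeros((20, 2))
pbest, pbest_f = pos.copy(), np.array([f(p) for p in pos])
z = 0.377                                        # chaos seed
for _ in range(200):
    g = pbest[np.argmin(pbest_f)].copy()         # global best position
    for i in range(20):
        r1 = z = tent_map(z)                     # chaotic coefficients
        r2 = z = tent_map(z)
        vel[i] = (0.7 * vel[i] + 1.5 * r1 * (pbest[i] - pos[i])
                  + 1.5 * r2 * (g - pos[i]))
        pos[i] += vel[i]
        if f(pos[i]) < pbest_f[i]:
            pbest[i], pbest_f[i] = pos[i].copy(), f(pos[i])
```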


Aims & Scope

IEEE Transactions on Neural Networks is devoted to the science and technology of neural networks, publishing significant technical knowledge, exploratory developments, and applications of neural networks spanning biology, software, and hardware.

 

This Transactions ceased production in 2011. The current retitled publication is IEEE Transactions on Neural Networks and Learning Systems.

Full Aims & Scope