Neural Networks, IEEE Transactions on

Issue 8 • Date Aug. 2010

Displaying Results 1 - 24 of 24
  • Table of contents

    Publication Year: 2010 , Page(s): C1
    PDF (44 KB)
    Freely Available from IEEE
  • IEEE Transactions on Neural Networks publication information

    Publication Year: 2010 , Page(s): C2
    PDF (39 KB)
    Freely Available from IEEE
  • On Some Necessary and Sufficient Conditions for a Recurrent Neural Network Model With Time Delays to Generate Oscillations

    Publication Year: 2010 , Page(s): 1197 - 1205
    Cited by:  Papers (3)
    PDF (1496 KB) | HTML

    In this paper, the existence of oscillations for a class of recurrent neural networks with time delays between neural interconnections is investigated. By using fixed point theory and a Lyapunov functional, we prove that a recurrent neural network may have a unique equilibrium point that is unstable. This particular type of instability, combined with the boundedness of the solutions of the system, forces the network to generate a permanent oscillation. Some necessary and sufficient conditions for these oscillations are obtained. Simple and practical criteria for fixing the range of parameters in this network are also derived. Typical simulation examples are presented.

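The mechanism the abstract describes (a unique unstable equilibrium plus bounded solutions forcing permanent oscillation) can be seen in a minimal sketch. This is not the paper's model: it is a single delayed-feedback neuron x'(t) = -x(t) + a·tanh(x(t - tau)) with hypothetical parameters a = -2, tau = 2, integrated by Euler's method. The equilibrium x = 0 is unique and unstable at these values, while tanh keeps trajectories bounded, so a sustained oscillation emerges.

```python
import numpy as np

def simulate_delayed_neuron(a=-2.0, tau=2.0, dt=0.01, t_end=100.0):
    """Euler integration of x'(t) = -x(t) + a*tanh(x(t - tau))."""
    n_delay = int(round(tau / dt))
    n_steps = int(round(t_end / dt))
    x = np.empty(n_steps + n_delay + 1)
    x[:n_delay + 1] = 0.1                 # constant initial history on [-tau, 0]
    for i in range(n_delay, n_delay + n_steps):
        x[i + 1] = x[i] + dt * (-x[i] + a * np.tanh(x[i - n_delay]))
    return x[n_delay:]

traj = simulate_delayed_neuron()
tail = traj[-2000:]                       # last 20 time units
print(tail.min(), tail.max())             # sustained swings of both signs
```

The tail of the trajectory keeps crossing zero with large amplitude, while the whole solution stays bounded, which is exactly the qualitative behaviour the paper's conditions characterise.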
  • A Fast Algorithm for Robust Mixtures in the Presence of Measurement Errors

    Publication Year: 2010 , Page(s): 1206 - 1220
    Cited by:  Papers (4)
    PDF (1258 KB) | HTML

    In experimental and observational sciences, detecting atypical, peculiar data from large sets of measurements has the potential of highlighting candidates of interesting new types of objects that deserve more detailed domain-specific followup study. However, measurement data is nearly never free of measurement errors. These errors can generate false outliers that are not truly interesting. Although many approaches exist for finding outliers, they have no means to tell to what extent the peculiarity is not simply due to measurement errors. To address this issue, we have developed a model-based approach to infer genuine outliers from multivariate data sets when measurement error information is available. This is based on a probabilistic mixture of hierarchical density models, in which parameter estimation is made feasible by a tree-structured variational expectation-maximization algorithm. Here, we further develop an algorithmic enhancement to address the scalability of this approach, in order to make it applicable to large data sets, via a K-dimensional-tree based partitioning of the variational posterior assignments. This creates a non-trivial tradeoff between a more detailed noise model to enhance the detection accuracy, and the coarsened posterior representation to obtain computational speedup. Hence, we conduct extensive experimental validation to study the accuracy/speed tradeoffs achievable in a variety of data conditions. We find that, at low-to-moderate error levels, a speedup factor that is at least linear in the number of data points can be achieved without significantly sacrificing the detection accuracy. The benefits of including measurement error information in the modeling are evident in all situations, and the gain roughly recovers the loss incurred by the speedup procedure in large error conditions. We analyze and discuss in detail the characteristics of our algorithm based on results obtained on appropriately designed synthetic data experiments, and we also demonstrate its use in a real application example.

  • Blind Multiuser Detector for Chaos-Based CDMA Using Support Vector Machine

    Publication Year: 2010 , Page(s): 1221 - 1231
    Cited by:  Papers (4)
    PDF (2165 KB) | HTML

    The algorithm and results of a blind multiuser detector using a machine learning technique called the support vector machine (SVM) on a chaos-based code division multiple access system are presented in this paper. Simulation results show that the performance achieved by using the SVM is comparable to that of the existing minimum mean square error (MMSE) detector under both additive white Gaussian noise (AWGN) and Rayleigh fading conditions. However, unlike the MMSE detector, the SVM detector requires neither knowledge of the spreading codes of the other users in the system nor an estimate of the channel noise variance. The optimization of this algorithm is considered in this paper and its complexity is compared with that of the MMSE detector. This detector is much better suited to the forward link than the MMSE detector. In addition, original theoretical bit-error rate expressions for the SVM detector under both AWGN and Rayleigh fading are derived to verify the simulation results.

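The ingredients of such a system can be sketched in a few lines. This is a toy, not the paper's detector: one user's BPSK bits are spread with a chaotic sequence from the logistic map, AWGN is added, and a linear hinge-loss (SVM-style) classifier is trained on the received chip vectors by sub-gradient descent. Unlike the paper's blind detector, this sketch uses labeled training bits, and all parameters (16 chips per bit, unit-variance noise) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16                                    # chips per bit

def logistic_code(n, x0=0.37):
    """Chaotic spreading sequence from the logistic map, mapped to ~[-1, 1]."""
    x, out = x0, []
    for _ in range(n):
        x = 3.99 * x * (1.0 - x)
        out.append(2.0 * x - 1.0)
    return np.array(out)

code = logistic_code(N)

def make_batch(n_bits, noise=1.0):
    bits = rng.choice([-1.0, 1.0], size=n_bits)
    rx = bits[:, None] * code[None, :] + noise * rng.standard_normal((n_bits, N))
    return rx, bits

def train_hinge(X, y, epochs=20, lr=0.01, lam=1e-3):
    """Linear SVM via sub-gradient descent on the regularized hinge loss."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (w @ xi) < 1.0:
                w += lr * (yi * xi - lam * w)
            else:
                w -= lr * lam * w
    return w

Xtr, ytr = make_batch(500)
w = train_hinge(Xtr, ytr)
Xte, yte = make_batch(2000)
ber = np.mean(np.sign(Xte @ w) != yte)
print("bit error rate:", ber)
```

With 16 chips of spreading, the learned weight vector behaves much like a matched filter, and the bit error rate stays small at this noise level.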
  • On the Selection of Weight Decay Parameter for Faulty Networks

    Publication Year: 2010 , Page(s): 1232 - 1244
    Cited by:  Papers (5)
    PDF (1485 KB) | HTML

    The weight-decay technique is an effective approach to handling overfitting and weight faults. For fault-free networks, without an appropriate value of the decay parameter, the trained network is either overfitted or underfitted. However, many existing results on the selection of the decay parameter focus on fault-free networks only. It is well known that the weight-decay method can also suppress the effect of weight faults. In the faulty case, using a test set to select the decay parameter is not practical because there is a huge number of possible faulty networks for a trained network. This paper develops two mean prediction error (MPE) formulae for predicting the performance of faulty radial basis function (RBF) networks. Two fault models, multiplicative weight noise and open weight fault, are considered. Our MPE formulae involve the training error and trained weights only; hence, our method does not need to generate a huge number of faulty networks to measure the test error for the fault situation. The MPE formulae allow us to select appropriate values of the decay parameter for faulty networks. Our experiments show that, although there are small differences between the true test errors (from the test set) and the MPE values, the MPE formulae can accurately locate the appropriate value of the decay parameter for minimizing the true test error of faulty networks.

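The brute-force procedure the paper avoids can be made concrete. The sketch below implements only the setting, not the paper's MPE formulae: an RBF network is trained with weight decay (ridge regression), multiplicative weight noise is injected, and the test error is averaged over many fault realizations for each candidate decay parameter. All data, widths, and the decay grid are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf_design(x, centers, width=0.3):
    """Gaussian RBF design matrix for 1-D inputs."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width**2))

xtr = np.linspace(0, 1, 60)
ytr = np.sin(2 * np.pi * xtr) + 0.1 * rng.standard_normal(60)
xte = np.linspace(0, 1, 200)
yte = np.sin(2 * np.pi * xte)
centers = np.linspace(0, 1, 15)
Phi_tr, Phi_te = rbf_design(xtr, centers), rbf_design(xte, centers)

def faulty_test_error(lam, noise_sd=0.3, n_faults=200):
    # weight-decay (ridge) solution for the RBF output weights
    w = np.linalg.solve(Phi_tr.T @ Phi_tr + lam * np.eye(len(centers)),
                        Phi_tr.T @ ytr)
    errs = []
    for _ in range(n_faults):
        # multiplicative weight noise fault model
        wf = w * (1.0 + noise_sd * rng.standard_normal(w.shape))
        errs.append(np.mean((Phi_te @ wf - yte) ** 2))
    return float(np.mean(errs))

lams = [1e-6, 1e-4, 1e-2, 1e-1, 1.0]
errs = [faulty_test_error(l) for l in lams]
best = lams[int(np.argmin(errs))]
print("best decay parameter:", best)
```

Averaging over hundreds of fault realizations per candidate is exactly the cost that the paper's MPE formulae, which use only the training error and the trained weights, are designed to remove.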
  • Control of Unknown Nonlinear Systems With Efficient Transient Performance Using Concurrent Exploitation and Exploration

    Publication Year: 2010 , Page(s): 1245 - 1261
    Cited by:  Papers (9)
    PDF (988 KB) | HTML

    Learning mechanisms that operate in unknown environments should be able to efficiently deal with the problem of controlling unknown dynamical systems. Many approaches that deal with such a problem face the so-called exploitation-exploration dilemma where the controller has to sacrifice efficient performance for the sake of learning “better” control strategies than the ones already known: during the exploration period, poor or even unstable closed-loop system performance may be exhibited. In this paper, we show that, in the case where the control goal is to stabilize an unknown dynamical system by means of state feedback, exploitation and exploration can be concurrently performed without the need of sacrificing efficiency. This is made possible through an appropriate combination of recent results developed by the author in the areas of adaptive control and adaptive optimization and a new result on the convex construction of control Lyapunov functions for nonlinear systems. The resulting scheme guarantees arbitrarily good performance in the regions where the system is controllable. Theoretical analysis as well as simulation results on a particularly challenging control problem verify such a claim.

  • Backpropagation and Ordered Derivatives in the Time Scales Calculus

    Publication Year: 2010 , Page(s): 1262 - 1269
    Cited by:  Papers (2)
    PDF (407 KB) | HTML

    Backpropagation is the most widely used neural network learning technique. It is based on the mathematical notion of an ordered derivative. In this paper, we present a formulation of ordered derivatives and the backpropagation training algorithm using the important emerging area of mathematics known as the time scales calculus. This calculus, with its potential for application to a wide variety of inter-disciplinary problems, is becoming a key area of mathematics. It is capable of unifying continuous and discrete analysis within one coherent theoretical framework. Using this calculus, we present here a generalization of backpropagation which is appropriate for cases beyond the specifically continuous or discrete. We develop a new multivariate chain rule of this calculus, define ordered derivatives on time scales, prove a key theorem about them, and derive the backpropagation weight update equations for a feedforward multilayer neural network architecture. By drawing together the time scales calculus and the area of neural network learning, we present the first connection of two major fields of research.

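The classical special case of this framework, the discrete time scale T = Z, is ordinary backpropagation. As a reference point, here is a minimal one-hidden-layer network trained by gradient descent on a toy 1-D regression task; the task and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy regression data: fit sin(pi*x) on [-1, 1].
X = np.linspace(-1, 1, 100)[:, None]
y = np.sin(np.pi * X)

H = 16                                     # hidden units
W1 = 0.5 * rng.standard_normal((1, H)); b1 = np.zeros(H)
W2 = 0.5 * rng.standard_normal((H, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

_, out0 = forward(X)
loss0 = np.mean((out0 - y) ** 2)

lr = 0.05
for _ in range(5000):
    h, out = forward(X)
    err = out - y                          # dL/d(out), up to a constant factor
    gW2 = h.T @ err / len(X)
    gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)         # backpropagate through tanh
    gW1 = X.T @ dh / len(X)
    gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, out = forward(X)
loss = np.mean((out - y) ** 2)
print(loss0, "->", loss)
```

The paper's contribution is to derive these same weight-update equations on an arbitrary time scale, so that the continuous and discrete derivations become two instances of one result.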
  • Approximate Robust Policy Iteration Using Multilayer Perceptron Neural Networks for Discounted Infinite-Horizon Markov Decision Processes With Uncertain Correlated Transition Matrices

    Publication Year: 2010 , Page(s): 1270 - 1280
    Cited by:  Papers (7)
    PDF (420 KB) | HTML

    We study finite-state, finite-action, discounted infinite-horizon Markov decision processes with uncertain correlated transition matrices in deterministic policy spaces. Existing robust dynamic programming methods cannot be extended to solving this class of general problems. In this paper, based on a robust optimality criterion, an approximate robust policy iteration using a multilayer perceptron neural network is proposed. It is proven that the proposed algorithm converges in finite iterations, and it converges to a stationary optimal or near-optimal policy in a probability sense. In addition, we point out that sometimes even a direct enumeration may not be applicable to addressing this class of problems. However, a direct enumeration based on our proposed maximum value approximation over the parameter space is a feasible approach. We provide further analysis to show that our proposed algorithm is more efficient than such an enumeration method for various scenarios.

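For readers unfamiliar with the baseline, here is standard (non-robust) policy iteration on a hypothetical 2-state, 2-action discounted MDP; the paper's contributions, robustness over uncertain correlated transition matrices and the MLP approximation, are not reproduced in this sketch.

```python
import numpy as np

# P[a, s, s']: transition probabilities; R[a, s]: expected rewards (toy values).
P = np.array([
    [[0.9, 0.1], [0.4, 0.6]],   # action 0
    [[0.2, 0.8], [0.7, 0.3]],   # action 1
])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
gamma = 0.9
n_states = 2

policy = np.zeros(n_states, dtype=int)
for _ in range(50):
    # Policy evaluation: solve the linear system (I - gamma * P_pi) V = R_pi.
    P_pi = np.array([P[policy[s], s] for s in range(n_states)])
    R_pi = np.array([R[policy[s], s] for s in range(n_states)])
    V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, R_pi)
    # Policy improvement: greedy one-step lookahead.
    Q = R + gamma * (P @ V)               # Q[a, s]
    new_policy = Q.argmax(axis=0)
    if np.array_equal(new_policy, policy):
        break                             # policy is stable, hence optimal
    policy = new_policy
print("policy:", policy, "values:", V)
```

In the robust setting the evaluation step becomes a max-min over the uncertainty set of transition matrices, which is what makes exact evaluation intractable and motivates the paper's neural approximation.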
  • Automatic Induction of Projection Pursuit Indices

    Publication Year: 2010 , Page(s): 1281 - 1295
    Cited by:  Papers (6)
    PDF (1998 KB) | HTML

    Projection techniques are frequently used as the principal means for the implementation of feature extraction and dimensionality reduction for machine learning applications. A well established and broad class of such projection techniques is the projection pursuit (PP). Its core design parameter is a projection index, which is the driving force in obtaining the transformation function via optimization, and represents in an explicit or implicit way the user's perception of the useful information contained within the datasets. This paper seeks to address the problem related to the design of PP index functions for the linear feature extraction case. We achieve this using an evolutionary search framework, capable of building new indices to fit the properties of the available datasets. The high expressive power of this framework is sustained by a rich set of function primitives. The performance of several PP indices previously proposed by human experts is compared with these automatically generated indices for the task of classification, and results show a decrease in the classification errors.

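To make the notion of a projection index concrete, the sketch below uses a classic hand-designed index, absolute excess kurtosis, and maximizes it over candidate unit directions to find a "non-Gaussian" 1-D projection of synthetic data. The paper's point is precisely that such indices can be *evolved* automatically rather than hand-picked; this example shows only what an index is and how it drives the search.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic data: axis 0 is strongly bimodal, all other axes are Gaussian.
d, n = 5, 2000
X = rng.standard_normal((n, d))
X[:, 0] = rng.choice([-3.0, 3.0], size=n) + 0.5 * rng.standard_normal(n)

def pp_index(z):
    """Projection index: |excess kurtosis| of a 1-D projection."""
    z = (z - z.mean()) / z.std()
    return abs(np.mean(z**4) - 3.0)

# Candidate directions: random unit vectors plus the coordinate axes.
cands = rng.standard_normal((500, d))
cands = np.vstack([cands, np.eye(d)])
cands /= np.linalg.norm(cands, axis=1, keepdims=True)

scores = np.array([pp_index(X @ v) for v in cands])
best = cands[scores.argmax()]
print("best direction:", np.round(best, 2), "index:", scores.max())
```

The winning direction aligns with the bimodal axis because the index rewards departure from Gaussianity, which is the "interestingness" criterion this particular index encodes.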
  • Fast Support Vector Data Descriptions for Novelty Detection

    Publication Year: 2010 , Page(s): 1296 - 1313
    Cited by:  Papers (10)
    PDF (1543 KB) | HTML

    Support vector data description (SVDD) has become a very attractive kernel method due to its good results in many novelty detection problems. However, the decision function of SVDD is expressed in terms of the kernel expansion, which results in a run-time complexity linear in the number of support vectors. For applications where fast real-time response is needed, how to speed up the decision function is crucial. This paper aims at reducing the testing time complexity of SVDD. A method called fast SVDD (F-SVDD) is proposed. Unlike traditional methods, which all try to compress a kernel expansion into one with fewer terms, the proposed F-SVDD directly finds the preimage of a feature vector, and then uses a simple relationship between this feature vector and the SVDD sphere center to re-express the center with a single vector. The decision function of F-SVDD contains only one kernel term, and thus the decision boundary of F-SVDD is simply a sphere in the original space. Hence, the run-time complexity of the F-SVDD decision function is no longer linear in the number of support vectors, but is a constant, no matter how large the training set size is. In this paper, we also propose a novel direct preimage-finding method, which is noniterative and involves no free parameters. The unique preimage can be obtained in real time by the proposed direct method without trial and error. For demonstration, several real-world data sets and a large-scale data set, the extended MIT face data set, are used in experiments. In addition, a practical industry example regarding liquid crystal display micro-defect inspection is also used to compare the applicability of SVDD and our proposed F-SVDD when faced with mass data input. The results are very encouraging.

  • Robust Exponential Stability of Markovian Jump Impulsive Stochastic Cohen-Grossberg Neural Networks With Mixed Time Delays

    Publication Year: 2010 , Page(s): 1314 - 1325
    Cited by:  Papers (39)
    PDF (549 KB) | HTML

    This paper is concerned with the problem of exponential stability for a class of Markovian jump impulsive stochastic Cohen-Grossberg neural networks with mixed time delays and known or unknown parameters. The jumping parameters are determined by a continuous-time, discrete-state Markov chain, and the mixed time delays under consideration comprise both time-varying delays and continuously distributed delays. To the best of the authors' knowledge, the exponential stability problem for this class of generalized neural networks, in which continuously distributed delays are considered, has not yet been solved. The main objective of this paper is to fill this gap. By constructing a novel Lyapunov-Krasovskii functional, and using some new approaches and techniques, several novel sufficient conditions are obtained to ensure the exponential stability of the trivial solution in the mean square. The results presented in this paper generalize and improve many known results. Finally, two numerical examples and their simulations are given to show the effectiveness of the theoretical results.

  • An Extension of the Standard Mixture Model for Image Segmentation

    Publication Year: 2010 , Page(s): 1326 - 1338
    Cited by:  Papers (18)
    PDF (4796 KB) | HTML

    Standard Gaussian mixture modeling (GMM) is a well-known method for image segmentation. However, the pixels themselves are considered independent of each other, making the segmentation result sensitive to noise. To reduce this sensitivity, Markov random field (MRF) models provide a powerful way to account for spatial dependences between image pixels. However, their main drawback is that they are computationally expensive to implement and require large numbers of parameters. Based on these considerations, we propose an extension of the standard GMM for image segmentation, which utilizes a novel approach to incorporate the spatial relationships between neighboring pixels into the standard GMM. The proposed model is easy to implement and, compared with MRF models, requires fewer parameters. We also propose a new method to estimate the model parameters in order to minimize an upper bound on the negative log-likelihood of the data, based on the gradient method. Experimental results obtained on noisy synthetic and real-world grayscale images demonstrate the robustness, accuracy, and effectiveness of the proposed model in image segmentation, as compared to other methods based on standard GMM and MRF models.

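The baseline the paper extends can be sketched directly: a two-component GMM fitted by EM to noisy pixel intensities from a synthetic two-region "image". Each pixel is treated independently, which is exactly the limitation the paper's spatial-neighbour coupling addresses; that coupling is not implemented here, and the intensity values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic pixels: two regions with mean intensities 0.2 and 0.8 plus noise.
n = 4000
labels = rng.random(n) < 0.5
pix = np.where(labels, 0.8, 0.2) + 0.08 * rng.standard_normal(n)

# Initialize the two components from intensity quantiles.
mu = np.array([np.quantile(pix, 0.25), np.quantile(pix, 0.75)])
var = np.array([0.01, 0.01])
pi = np.array([0.5, 0.5])

for _ in range(100):
    # E-step: per-pixel responsibilities under each Gaussian component.
    dens = pi * np.exp(-(pix[:, None] - mu) ** 2 / (2 * var)) \
           / np.sqrt(2 * np.pi * var)
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: weighted updates of means, variances, and mixing weights.
    nk = r.sum(axis=0)
    mu = (r * pix[:, None]).sum(axis=0) / nk
    var = (r * (pix[:, None] - mu) ** 2).sum(axis=0) / nk
    pi = nk / n

print("recovered means:", np.sort(mu))
```

EM recovers the two region intensities on clean, well-separated data; the paper's extension matters when noise makes independent per-pixel assignments unreliable and neighbouring pixels must inform each other.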
  • Adaptive Neural Control for Output Feedback Nonlinear Systems Using a Barrier Lyapunov Function

    Publication Year: 2010 , Page(s): 1339 - 1345
    Cited by:  Papers (20)
    PDF (377 KB) | HTML

    In this brief, adaptive neural control is presented for a class of output feedback nonlinear systems in the presence of unknown functions. The unknown functions are handled via on-line neural network (NN) control using only output measurements. A barrier Lyapunov function (BLF) is introduced to address two open and challenging problems in the neuro-control area: 1) for any initial compact set, how to determine a priori the compact superset, on which NN approximation is valid; and 2) how to ensure that the arguments of the unknown functions remain within the specified compact superset. By ensuring boundedness of the BLF, we actively constrain the argument of the unknown functions to remain within a compact superset such that the NN approximation conditions hold. The semiglobal boundedness of all closed-loop signals is ensured, and the tracking error converges to a neighborhood of zero. Simulation results demonstrate the effectiveness of the proposed approach.

  • Marginalized Neural Network Mixtures for Large-Scale Regression

    Publication Year: 2010 , Page(s): 1345 - 1351
    Cited by:  Papers (1)
    PDF (546 KB) | HTML

    For regression tasks, traditional neural networks (NNs) have been superseded by Gaussian processes, which provide probabilistic predictions (input-dependent error bars), improved accuracy, and virtually no overfitting. Due to their high computational cost, in scenarios with massive data sets, one has to resort to sparse Gaussian processes, which strive to achieve similar performance with much smaller computational effort. In this context, we introduce a mixture of NNs with marginalized output weights that can both provide probabilistic predictions and improve on the performance of sparse Gaussian processes, at the same computational cost. The effectiveness of this approach is shown experimentally on some representative large data sets.

  • Neural-Network-Based Adaptive Leader-Following Control for Multiagent Systems With Uncertainties

    Publication Year: 2010 , Page(s): 1351 - 1358
    Cited by:  Papers (27)
    PDF (847 KB) | HTML

    A neural-network-based adaptive approach is proposed for the leader-following control of multiagent systems. The neural network is used to approximate the agent's uncertain dynamics, and the approximation error and external disturbances are counteracted by employing a robust signal. When there is no control input constraint, it is proved that all following agents can track the leader's time-varying state with a tracking error as small as desired. Compared with related work in the literature, the uncertainty in the agent's dynamics is taken into account; the leader's state may be time-varying; and the proposed algorithm for each following agent depends only on the information of its neighbor agents. Finally, the satisfactory performance of the proposed method is illustrated by simulation examples.

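The leader-following structure itself can be sketched without the paper's NN machinery. Below, three single-integrator followers (known dynamics, so no NN approximation is needed) run a neighbour-based consensus protocol to track a time-varying leader; the graph, gains, and leader signal are all hypothetical.

```python
import numpy as np

# Follower communication graph: who hears whom (a path 0-1-2),
# and only follower 0 hears the leader directly.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
b = np.array([1.0, 0.0, 0.0])

dt, T, k = 0.01, 40.0, 10.0
x = np.array([2.0, -1.0, 0.5])            # follower states
for step in range(int(T / dt)):
    t = step * dt
    x0 = np.sin(0.2 * t)                  # time-varying leader state
    u = np.zeros(3)
    for i in range(3):
        # each follower uses only neighbour states (and, for i=0, the leader)
        u[i] = -k * (A[i] @ (x[i] - x)) - k * b[i] * (x[i] - x0)
    x = x + dt * u                        # single-integrator dynamics

err = np.abs(x - np.sin(0.2 * T))
print("final tracking errors:", err)
```

With a purely proportional protocol the tracking error stays bounded rather than vanishing for a moving leader; the adaptive NN terms in the paper are what drive the error arbitrarily small under uncertain dynamics.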
  • Exponential H∞ Synchronization of General Discrete-Time Chaotic Neural Networks With or Without Time Delays

    Publication Year: 2010 , Page(s): 1358 - 1365
    Cited by:  Papers (2)
    PDF (464 KB) | HTML

    This brief studies exponential H∞ synchronization of a class of general discrete-time chaotic neural networks with external disturbance. On the basis of the drive-response concept and H∞ control theory, and using a Lyapunov-Krasovskii (or Lyapunov) functional, state feedback controllers are established to not only guarantee exponentially stable synchronization between two general chaotic neural networks with or without time delays, but also reduce the effect of external disturbance on the synchronization error to a minimal H∞-norm constraint. The proposed controllers can be obtained by solving convex optimization problems represented by linear matrix inequalities. Most discrete-time chaotic systems with or without time delays, such as Hopfield neural networks, cellular neural networks, bidirectional associative memory networks, recurrent multilayer perceptrons, Cohen-Grossberg neural networks, Chua's circuits, etc., can be transformed into this general chaotic neural network, for which the H∞ synchronization controller can then be designed in a unified way. Finally, some illustrative examples with their simulations are used to demonstrate the effectiveness of the proposed methods.

  • Delay-Derivative-Dependent Stability for Delayed Neural Networks With Unbound Distributed Delay

    Publication Year: 2010 , Page(s): 1365 - 1371
    Cited by:  Papers (6)
    PDF (257 KB) | HTML

    In this brief, based on the Lyapunov-Krasovskii functional approach and an appropriate integral inequality, a new sufficient condition is derived to guarantee the global stability of delayed neural networks with unbounded distributed delay, in which an improved delay-partitioning technique and a general convex combination are employed. The LMI-based criterion depends on both the upper and lower bounds of the time delay and its derivative, which distinguishes it from existing conditions and gives it a wider field of application than some present results. Finally, three numerical examples illustrate the efficiency of the new method, based on the reduced conservatism that can be achieved by refining the delay interval.

  • Multistability of Recurrent Neural Networks With Time-varying Delays and the Piecewise Linear Activation Function

    Publication Year: 2010 , Page(s): 1371 - 1377
    Cited by:  Papers (30)
    PDF (471 KB) | HTML

    In this brief, the stability of multiple equilibria of recurrent neural networks with time-varying delays and the piecewise linear activation function is studied. A sufficient condition is obtained to ensure that n-neuron recurrent neural networks can have (4k-1)^n equilibrium points, (2k)^n of which are locally exponentially stable. This condition improves and extends existing stability results in the literature. Simulation results are also discussed in an illustrative example.

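The equilibrium counts stated in the abstract are simple closed-form expressions, so they can be evaluated directly. For example, a 2-neuron network with k = 1 falls under the classical 9-equilibria / 4-stable case:

```python
# Equilibrium counts from the brief's result: an n-neuron network of this
# class can have (4k - 1)^n equilibria, (2k)^n of them locally
# exponentially stable.
def equilibrium_counts(n, k):
    return (4 * k - 1) ** n, (2 * k) ** n

total, stable = equilibrium_counts(n=2, k=1)
print(total, stable)   # 9 equilibria, 4 of them stable
```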
  • 2011 International Joint Conference on Neural Networks

    Publication Year: 2010 , Page(s): 1378
    PDF (870 KB)
    Freely Available from IEEE
  • Access over 1 million articles - The IEEE Digital Library [advertisement]

    Publication Year: 2010 , Page(s): 1379
    PDF (370 KB)
    Freely Available from IEEE
  • Why we joined ... [advertisement]

    Publication Year: 2010 , Page(s): 1380
    PDF (205 KB)
    Freely Available from IEEE
  • IEEE Computational Intelligence Society Information

    Publication Year: 2010 , Page(s): C3
    PDF (37 KB)
    Freely Available from IEEE
  • IEEE Transactions on Neural Networks Information for authors

    Publication Year: 2010 , Page(s): C4
    PDF (39 KB)
    Freely Available from IEEE

Aims & Scope

IEEE Transactions on Neural Networks is devoted to the science and technology of neural networks, publishing significant technical knowledge, exploratory developments, and applications of neural networks ranging from biology to software to hardware.

 

This Transactions ceased production in 2011. The current retitled publication is IEEE Transactions on Neural Networks and Learning Systems.

Full Aims & Scope