
IEEE Transactions on Neural Networks

Issue 4 • April 2008

  • Table of contents

    Publication Year: 2008, Page(s): C1 - C4
    PDF (37 KB)
    Freely Available from IEEE
  • IEEE Transactions on Neural Networks publication information

    Publication Year: 2008, Page(s): C2
    PDF (38 KB)
    Freely Available from IEEE
  • Delay-Dependent Criteria for Global Robust Periodicity of Uncertain Switched Recurrent Neural Networks With Time-Varying Delay

    Publication Year: 2008, Page(s): 549 - 557
    Cited by: Papers (16)
    PDF (479 KB) | HTML

    In this paper, we introduce ideas from switched systems into the field of neural networks and investigate a large class of switched recurrent neural networks (SRNNs) with time-varying structured uncertainties and time-varying delay. Some delay-dependent robust periodicity criteria guaranteeing the existence, uniqueness, and global asymptotic stability of the periodic solution for all admissible parametric uncertainties are devised by taking into account the relationship between the terms in the Leibniz-Newton formula. Because free weighting matrices are used to express this relationship and appropriate ones are selected by means of linear matrix inequalities (LMIs), the criteria are less conservative than existing ones reported in the literature for delayed neural networks with parameter uncertainties. Examples are given to show that the proposed criteria are effective and improve upon previous ones.

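    Criteria of this kind reduce to checking the feasibility of a linear matrix inequality with a convex solver. As a hedged sketch only, the following checks a classic delay-independent LMI condition for a toy linear delayed system x'(t) = A x(t) + Ad x(t - tau); it is not the paper's free-weighting-matrix criterion, and the matrices A and Ad are invented for illustration.

      import cvxpy as cp
      import numpy as np

      # Toy linear delayed system: x'(t) = A x(t) + Ad x(t - tau)
      A = np.array([[-2.0, 0.0],
                    [0.0, -3.0]])
      Ad = np.array([[0.5, 0.2],
                     [0.1, 0.4]])
      n = A.shape[0]

      P = cp.Variable((n, n), symmetric=True)
      Q = cp.Variable((n, n), symmetric=True)

      # Delay-independent condition, written via a Schur complement:
      # [[A'P + PA + Q, P Ad], [Ad'P, -Q]] < 0  with  P > 0, Q > 0
      M = cp.bmat([[A.T @ P + P @ A + Q, P @ Ad],
                   [Ad.T @ P, -Q]])
      eps = 1e-6
      constraints = [P >> eps * np.eye(n),
                     Q >> eps * np.eye(n),
                     M << -eps * np.eye(2 * n)]
      problem = cp.Problem(cp.Minimize(0), constraints)
      problem.solve()
      print("LMI feasible (stability certified):", problem.status == "optimal")

    A feasible pair (P, Q) certifies a Lyapunov-Krasovskii functional for the toy system; the paper's criteria play the same role for the much richer switched, uncertain case.
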
  • A One-Layer Recurrent Neural Network With a Discontinuous Hard-Limiting Activation Function for Quadratic Programming

    Publication Year: 2008, Page(s): 558 - 570
    Cited by: Papers (58)
    PDF (842 KB) | HTML

    In this paper, a one-layer recurrent neural network with a discontinuous hard-limiting activation function is proposed for quadratic programming. This neural network is capable of solving a large class of quadratic programming problems. The state variables of the neural network are proven to be globally stable, and the output variables are proven to converge to optimal solutions as long as the objective function is strictly convex on the set defined by the equality constraints. In addition, a sequential quadratic programming approach based on the proposed recurrent neural network is developed for general nonlinear programming. Simulation results on numerical examples and support vector machine (SVM) learning show the effectiveness and performance of the neural network.

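    For intuition about how recurrent dynamics can settle onto a QP solution, here is a minimal sketch of the classic continuous-time primal-dual (saddle-point) flow for an equality-constrained, strictly convex QP, integrated with Euler steps. It is not the paper's discontinuous hard-limiting network, and the problem data are invented.

      import numpy as np

      # minimize 0.5 x'Qx + c'x  subject to  A x = b  (Q strictly convex)
      Q = np.array([[4.0, 1.0],
                    [1.0, 3.0]])
      c = np.array([-1.0, -2.0])
      A = np.array([[1.0, 1.0]])
      b = np.array([1.0])

      x = np.zeros(2)   # "neuron" states (primal variables)
      y = np.zeros(1)   # multiplier state (dual variable)
      dt = 1e-3
      for _ in range(50000):
          x += dt * (-(Q @ x + c + A.T @ y))   # descent on the Lagrangian
          y += dt * (A @ x - b)                # ascent on constraint violation
      print("x* ~", x, " residual:", A @ x - b)   # expects x* ~ [0.2, 0.8]

    The flow's equilibrium is exactly the KKT point; the paper's stability analysis establishes the analogous convergence property for its own dynamics.
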
  • Locality-Preserved Maximum Information Projection

    Publication Year: 2008, Page(s): 571 - 585
    Cited by: Papers (27)
    PDF (1681 KB) | HTML

    Dimensionality reduction is a common requirement in artificial intelligence and machine learning. Linear projection of features is of particular interest for dimensionality reduction since it is simple to compute and to analyze analytically. In this paper, we propose an essentially linear projection technique, called locality-preserved maximum information projection (LPMIP), to identify the underlying manifold structure of a data set. LPMIP considers both the within-locality and the between-locality in the process of manifold learning. Equivalently, the goal of LPMIP is to preserve the local structure while simultaneously maximizing the out-of-locality (global) information of the samples. Unlike principal component analysis (PCA), which aims to preserve global information, and locality-preserving projections (LPP), which favors the local structure of the data set, LPMIP seeks a tradeoff between the global and local structures, adjusted by a parameter α, so as to find a subspace that detects the intrinsic manifold structure for classification tasks. Computationally, by constructing the adjacency matrix, LPMIP is formulated as an eigenvalue problem. LPMIP yields orthogonal basis functions and completely avoids the singularity problem that exists in LPP. Further, we develop an efficient and stable LPMIP/QR algorithm for implementing LPMIP, especially on high-dimensional data sets. Theoretical analysis shows that conventional linear projection methods such as (weighted) PCA, maximum margin criterion (MMC), linear discriminant analysis (LDA), and LPP can be derived from the LPMIP framework by setting different graph models and constraints. Extensive experiments on face, digit, and facial expression recognition show the effectiveness of the proposed LPMIP method.

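    The abstract describes the method only at a high level, so the following is an assumption-laden sketch of a locality-versus-globality trade-off projection: build a kNN adjacency graph, form a Laplacian-based local scatter and a global scatter, and keep the top eigenvectors of their alpha-weighted difference. The trace-difference objective is a guess at the flavor of LPMIP, not the paper's exact formulation.

      import numpy as np
      from scipy.spatial.distance import cdist

      def lpmip_like(X, n_components=2, k=5, alpha=0.5):
          # X: (n_samples, n_features); returns an orthonormal basis (columns)
          n = X.shape[0]
          dist = cdist(X, X)
          W = np.zeros((n, n))                    # kNN adjacency matrix
          for i in range(n):
              for j in np.argsort(dist[i])[1:k + 1]:
                  W[i, j] = W[j, i] = 1.0
          Xc = X - X.mean(axis=0)
          L = np.diag(W.sum(axis=1)) - W          # graph Laplacian
          S_local = Xc.T @ L @ Xc                 # within-locality scatter
          S_global = Xc.T @ Xc                    # global scatter
          # Trade-off: maximize alpha * global - (1 - alpha) * local
          vals, vecs = np.linalg.eigh(alpha * S_global - (1 - alpha) * S_local)
          order = np.argsort(vals)[::-1]
          return vecs[:, order[:n_components]]

      X = np.random.default_rng(0).standard_normal((100, 10))
      P = lpmip_like(X)
      print(P.shape, np.allclose(P.T @ P, np.eye(2)))   # orthogonal basis, as in LPMIP
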
  • Shared Feature Extraction for Nearest Neighbor Face Recognition

    Publication Year: 2008, Page(s): 586 - 595
    Cited by: Papers (8)
    PDF (1116 KB) | HTML

    In this paper, we propose a new supervised linear feature extraction technique for multiclass classification problems that is especially suited to the nearest neighbor (NN) classifier. The problem of finding the optimal linear projection matrix is defined as a classification problem, and the Adaboost algorithm is used to compute it iteratively. This strategy allows the introduction of a multitask learning (MTL) criterion into the method and results in a solution that makes no assumptions about the data distribution and is especially appropriate for the small sample size problem. The performance of the method is illustrated by an application to face recognition. The experiments show that the representation obtained by the multitask approach improves on classic feature extraction algorithms when using the NN classifier, especially when only a few examples per class are available.

  • Complex ICA by Negentropy Maximization

    Publication Year: 2008, Page(s): 596 - 609
    Cited by: Papers (36)
    PDF (1133 KB) | HTML

    In this paper, we use complex analytic functions to achieve independent component analysis (ICA) by maximization of non-Gaussianity and introduce the complex maximization of non-Gaussianity (CMN) algorithm. We derive both a gradient-descent and a quasi-Newton algorithm that use the full second-order statistics, providing superior performance with circular and noncircular sources as compared to existing methods. We show the connection among ICA methods based on maximization of non-Gaussianity, mutual information, and maximum likelihood (ML) for the complex case, and emphasize the importance of density matching for all three. Local stability conditions are derived for the CMN cost function that explicitly show the effects of noncircularity on convergence; these effects are demonstrated through simulation examples.

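    The CMN algorithm pairs analytic nonlinearities with density matching; as a far simpler stand-in, here is a complex FastICA-style fixed-point iteration with the kurtosis nonlinearity g(u) = u, extracting one source from a whitened synthetic complex mixture. None of this code comes from the paper; the sources and mixing matrix are invented.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 5000
      s1 = rng.choice([-1.0, 1.0], n) + 0.0j        # noncircular (BPSK-like)
      s2 = np.exp(2j * np.pi * rng.random(n))       # circular (uniform phase)
      S = np.vstack([s1, s2])
      Amix = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
      X = Amix @ S

      C = X @ X.conj().T / n                        # whitening: E{z z^H} = I
      d, E = np.linalg.eigh(C)
      Z = np.diag(d ** -0.5) @ E.conj().T @ X

      w = rng.standard_normal(2) + 1j * rng.standard_normal(2)
      w /= np.linalg.norm(w)
      for _ in range(100):                          # fixed-point updates, g(u) = u
          y = w.conj() @ Z                          # y = w^H z
          w_new = (Z * (np.abs(y) ** 2 * y.conj())).mean(axis=1) - 2.0 * w
          w = w_new / np.linalg.norm(w_new)
      y = w.conj() @ Z
      for i, s in enumerate(S):                     # |cosine| near 1 for one source
          print(abs(np.vdot(y, s)) / (np.linalg.norm(y) * np.linalg.norm(s)))
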
  • Large-Scale Maximum Margin Discriminant Analysis Using Core Vector Machines

    Publication Year: 2008, Page(s): 610 - 624
    Cited by: Papers (19)
    PDF (1020 KB) | HTML

    Large-margin methods, such as support vector machines (SVMs), have been very successful in classification problems. Recently, maximum margin discriminant analysis (MMDA) was proposed, extending the large-margin idea to feature extraction; it often outperforms traditional methods such as kernel principal component analysis (KPCA) and kernel Fisher discriminant analysis (KFD). However, as in the SVM, its time complexity is cubic in the number of training points m, making it computationally inefficient on massive data sets. In this paper, we propose a (1 + ε)²-approximation algorithm for obtaining the MMDA features by extending the core vector machine. The resultant time complexity is only linear in m, while its space complexity is independent of m. Extensive comparisons with the original MMDA, KPCA, and KFD on a number of large data sets show that the proposed feature extractor can improve classification accuracy, and is faster than these kernel-based methods by over an order of magnitude.

  • DCT-Yager FNN: A Novel Yager-Based Fuzzy Neural Network With the Discrete Clustering Technique

    Publication Year: 2008, Page(s): 625 - 644
    Cited by: Papers (8)
    PDF (3021 KB) | HTML

    Earlier clustering techniques such as the modified learning vector quantization (MLVQ) and the fuzzy Kohonen partitioning (FKP) techniques have focused on deriving a set of parameters that define fuzzy sets in terms of an algebraic function. The fuzzy membership functions thus generated are uniform, normal, and convex. Since any irregular training data are clustered into uniform fuzzy sets (Gaussian, triangular, or trapezoidal), the clustering may not be exact and some information may be lost. In this paper, two clustering techniques using a Kohonen-like self-organizing neural network architecture, namely, the unsupervised discrete clustering technique (UDCT) and the supervised discrete clustering technique (SDCT), are proposed. The UDCT and SDCT algorithms reduce this data loss by introducing nonuniform, normal fuzzy sets that are not necessarily convex. The training data range is divided into discrete points at equal intervals, and the membership value corresponding to each discrete point is generated. Hence, the fuzzy sets obtained contain pairs of values, each pair corresponding to a discrete point and its membership grade. Thus, fuzzy membership functions generated by this discrete methodology provide a more accurate representation of the actual input data, as demonstrated by comparing the membership functions generated by the UDCT and SDCT algorithms against those generated by the MLVQ, FKP, and pseudofuzzy Kohonen partitioning (PFKP) algorithms. In addition to these clustering techniques, a novel pattern-classifying network called the Yager fuzzy neural network (FNN) is proposed. This network corresponds completely to the Yager inference rule and exhibits remarkable generalization abilities. A modified version of the pseudo-outer product (POP)-Yager FNN, called the modified Yager FNN, is introduced that eliminates the drawbacks of the earlier network and yields superior performance. Extensive experiments have been conducted to test the effectiveness of these two networks using various clustering algorithms. It follows that the SDCT and UDCT clustering algorithms are particularly suited to networks based on the Yager inference rule.

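    To make the "pairs of discrete points and membership grades" idea concrete, this toy sketch (not the UDCT/SDCT algorithms) divides a 1-D training range into discrete points at equal intervals and assigns each point a grade from a normalized histogram, yielding a nonuniform, normal, and not necessarily convex fuzzy set; the data are synthetic.

      import numpy as np

      rng = np.random.default_rng(1)
      data = np.concatenate([rng.normal(2.0, 0.3, 400),    # bimodal data: no single
                             rng.normal(4.0, 0.5, 600)])   # convex set fits it well

      n_points = 21                                        # equal-interval discretization
      edges = np.linspace(data.min(), data.max(), n_points + 1)
      counts, _ = np.histogram(data, bins=edges)
      grades = counts / counts.max()                       # normal: peak grade is 1.0
      points = 0.5 * (edges[:-1] + edges[1:])

      for p, g in zip(points, grades):
          print(f"({p:5.2f}, {g:4.2f})")                   # (discrete point, grade) pairs
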
  • Real-Time Reconfigurable Subthreshold CMOS Perceptron

    Publication Year: 2008, Page(s): 645 - 657
    Cited by: Papers (4)
    PDF (1082 KB) | HTML

    In this paper, a new, real-time reconfigurable perceptron circuit element is presented. A six-transistor version used as a threshold gate with a fan-in of three, producing correct outputs for thresholds T = 1, 2, and 3, is demonstrated by chip measurements. Subthreshold operation for supply voltages in the range of 100-350 mV is shown. When used in a ring oscillator, the circuit performs competitively with a standard static complementary metal-oxide-semiconductor (CMOS) implementation when maximum speed and energy-delay product are taken into account. Functionality per transistor is, to our knowledge, the highest reported for a variety of comparable circuits not based on floating-gate techniques. Statistical simulations predict the probability of obtaining working circuits under mismatch and process variations. The simulations, in 120-nm CMOS, also support discussions of lower limits on supply voltage and of redundancy. A brief discussion is included on how the circuit may be exploited as a basic building block for future defect-tolerant mixed-signal circuits, as well as redundancy-exploiting neural networks.

  • Relevance-Based Feature Extraction for Hyperspectral Images

    Publication Year: 2008, Page(s): 658 - 672
    Cited by: Papers (13)
    PDF (1136 KB) | HTML

    Hyperspectral imagery affords researchers all the discriminating details needed for fine delineation of many material classes. This delineation is essential for scientific research ranging from geologic to environmental impact studies. In a data mining scenario, one cannot blindly discard information because it can destroy discovery potential. In a supervised classification scenario, however, the preselection of classes presents an opportunity to extract a reduced set of meaningful features without degrading classification performance. Given the complex correlations found in hyperspectral data and the potentially large number of classes, meaningful feature extraction is a difficult task. We turn to the recent neural paradigm of generalized relevance learning vector quantization (GRLVQ) [B. Hammer and T. Villmann, Neural Networks, vol. 15, pp. 1059-1068, 2002], which is based on, and substantially extends, learning vector quantization (LVQ) [T. Kohonen, Self-Organizing Maps, Berlin, Germany: Springer-Verlag, 2001] by learning relevant input dimensions while incorporating classification accuracy in the cost function. By addressing deficiencies in GRLVQ, we produce an improved version, GRLVQI, which is an effective analysis tool for high-dimensional data such as remotely sensed hyperspectral data. With an independent classifier, we show that the spectral features deemed relevant by our improved GRLVQI result in better classification for a predefined set of surface materials than using all available spectral channels.

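    GRLVQ's core step, reimplemented below from the Hammer-Villmann formulation as a sketch (details such as the learning rates are invented), adapts the closest correct and closest wrong prototypes plus a per-dimension relevance vector using the relative-distance cost mu = (d+ - d-)/(d+ + d-). The learned relevances are what make the method useful for ranking spectral bands.

      import numpy as np

      def grlvq_step(x, y, protos, labels, lam, lr_w=0.05, lr_l=0.005):
          d = ((protos - x) ** 2 * lam).sum(axis=1)        # relevance-weighted distances
          jp = np.where(labels == y, d, np.inf).argmin()   # closest correct prototype
          jm = np.where(labels != y, d, np.inf).argmin()   # closest wrong prototype
          dp, dm = d[jp], d[jm]
          mu = (dp - dm) / (dp + dm)
          f = 1.0 / (1.0 + np.exp(-mu))
          g = f * (1.0 - f)                                # sigmoid derivative
          xp = 2.0 * dm / (dp + dm) ** 2                   # d(mu)/d(dp)
          xm = 2.0 * dp / (dp + dm) ** 2                   # -d(mu)/d(dm)
          dxp, dxm = x - protos[jp], x - protos[jm]
          protos[jp] += lr_w * g * xp * lam * dxp          # attract correct prototype
          protos[jm] -= lr_w * g * xm * lam * dxm          # repel wrong prototype
          lam -= lr_l * g * (xp * dxp ** 2 - xm * dxm ** 2)
          np.clip(lam, 0.0, None, out=lam)
          lam /= lam.sum()                                 # keep relevances on the simplex
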
  • Output Feedback Stabilization for Time-Delay Nonlinear Interconnected Systems Using Neural Networks

    Publication Year: 2008, Page(s): 673 - 688
    Cited by: Papers (27)
    PDF (505 KB) | HTML

    In this paper, the dynamic output feedback control problem is investigated for a class of nonlinear interconnected systems with time delays. A decentralized observer independent of the time delays is designed first. Then, the bound information on the uncertain interconnections is employed to construct a decentralized output feedback controller via the backstepping design method. Based on Lyapunov stability theory, we show that the designed controller renders the closed-loop system asymptotically stable with the help of the changing-supply-function idea. Furthermore, the corresponding decentralized control problem is considered for the case in which the bounds of the uncertain interconnections are not precisely known. By employing neural network approximation theory, we construct a neural network output feedback controller with a corresponding adaptive law. The resulting closed-loop system is stable in the sense of semiglobal boundedness. The observers and controllers constructed in this paper are independent of the time delays. Finally, simulations verify the effectiveness of the theoretical results.

  • PSECMAC: A Novel Self-Organizing Multiresolution Associative Memory Architecture

    Publication Year: 2008, Page(s): 689 - 712
    Cited by: Papers (12)
    PDF (1446 KB) | HTML

    The cerebellum constitutes a vital part of the human brain system that possesses the capability to model highly nonlinear physical dynamics. The cerebellar model articulation controller (CMAC) associative memory network is a computational model inspired by the neurophysiological properties of the cerebellum, and it has been widely used for control, optimization, and various pattern recognition tasks. However, the CMAC network's highly regularized computing structure often leads to the following: (1) suboptimal modeling accuracy, (2) poor memory utilization, and (3) the generalization-accuracy dilemma. Previous attempts to address these shortcomings have had limited success, and the proposed solutions often introduce high operational complexity to the CMAC network. This paper presents a novel neurophysiologically inspired associative memory architecture named pseudo-self-evolving CMAC (PSECMAC) that nonuniformly allocates its computing cells to overcome the architectural deficiencies encountered by the CMAC network. The nonuniform memory allocation scheme employed by the proposed PSECMAC network is inspired by the experience-driven synaptic plasticity phenomenon observed in the cerebellum, where significantly higher densities of synaptic connections are located in the frequently accessed regions. In the PSECMAC network, this biological synaptic plasticity phenomenon is emulated by employing a data-driven adaptive memory quantization scheme that defines its computing structure. A neighborhood-based activation process is subsequently implemented to facilitate the learning and computation of the PSECMAC structure. The training stability of the PSECMAC network is theoretically assured by the proof of its learning convergence, which is presented in this paper. The performance of the proposed network is subsequently benchmarked against the CMAC network and several representative CMAC variants on three real-life applications, namely, pricing of currency futures options, banking failure classification, and modeling of the glucose-insulin dynamics of the human glucose metabolic process. The experimental results strongly demonstrate the effectiveness of the PSECMAC network in addressing the architectural deficiencies of the CMAC network, achieving significant improvements in memory utilization, output accuracy, and the generalization capability of the network.

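    The architectural idea of allocating more computing cells where data are dense can be illustrated without any CMAC machinery: on skewed data, quantile-based quantization boundaries equalize the per-cell load where uniform boundaries do not. This is only an illustration of data-driven nonuniform quantization, not the PSECMAC allocation scheme; the data are synthetic.

      import numpy as np

      rng = np.random.default_rng(2)
      data = rng.lognormal(mean=0.0, sigma=0.75, size=10000)  # skewed input density
      n_cells = 8

      uniform_edges = np.linspace(data.min(), data.max(), n_cells + 1)
      quantile_edges = np.quantile(data, np.linspace(0.0, 1.0, n_cells + 1))

      # Samples landing in each memory cell under the two allocation schemes
      print("uniform :", np.histogram(data, bins=uniform_edges)[0])
      print("quantile:", np.histogram(data, bins=quantile_edges)[0])
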
  • Adaptive Importance Sampling to Accelerate Training of a Neural Probabilistic Language Model

    Publication Year: 2008, Page(s): 713 - 722
    Cited by: Papers (2)
    PDF (494 KB) | HTML

    Previous work on statistical language modeling has shown that it is possible to train a feedforward neural network to approximate probabilities over sequences of words, resulting in a significant error reduction compared to standard baseline models based on n-grams. However, training the neural network model with the maximum-likelihood criterion requires computations proportional to the number of words in the vocabulary. In this paper, we introduce adaptive importance sampling as a way to accelerate training of the model. The idea is to use an adaptive n-gram model to track the conditional distributions produced by the neural network. We show that a very significant speedup can be obtained on standard problems.

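    The expensive term in maximum-likelihood training is an expectation over the whole vocabulary under the softmax. The sketch below estimates such an expectation with self-normalized importance weights drawn from a cheap proposal (a uniform stand-in for the paper's adaptive n-gram); the vocabulary size and scores are synthetic.

      import numpy as np

      rng = np.random.default_rng(3)
      V = 50000                              # vocabulary size
      scores = rng.standard_normal(V)        # network scores s_v for one context

      p = np.exp(scores - scores.max())      # exact (expensive) softmax ...
      p /= p.sum()
      exact = (p * scores).sum()             # ... expectation of a statistic, E_p[s]

      q = np.full(V, 1.0 / V)                # proposal distribution
      k = 250                                # scores actually computed per step
      idx = rng.choice(V, size=k, p=q)
      w = np.exp(scores[idx]) / q[idx]       # unnormalized importance weights
      w /= w.sum()                           # self-normalized estimator
      estimate = (w * scores[idx]).sum()
      print(f"exact {exact:.4f}  vs  IS estimate {estimate:.4f} using {k} of {V} scores")

    In training, the statistic being averaged is the gradient of each word's score, so only k scores and gradients are computed instead of V.
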
  • A Note on the Bias in SVMs for Multiclassification

    Publication Year: 2008, Page(s): 723 - 725
    Cited by: Papers (11)
    PDF (173 KB) | HTML

    During the usual SVM biclassification learning process, the bias is chosen a posteriori as the value halfway between the separating hyperplanes. This note surveys different approaches to calculating the bias when SVMs are used for multiclassification, and empirical experimentation shows that the accuracy rate can be improved by using these bias formulations, although no single formulation stands out as providing the best performance.

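    For reference, the a posteriori bias the note refers to places the decision hyperplane midway between the two classes' closest functional margins. A minimal sketch with a linear kernel and an already-learned weight vector (all data invented):

      import numpy as np

      def halfway_bias(w, X, y):
          # b = -( min over +1 class of w.x  +  max over -1 class of w.x ) / 2
          s = X @ w
          return -0.5 * (s[y == 1].min() + s[y == -1].max())

      X = np.array([[2.0, 2.0], [3.0, 3.0], [-1.0, -1.0], [-2.0, 0.0]])
      y = np.array([1, 1, -1, -1])
      w = np.array([1.0, 1.0])
      b = halfway_bias(w, X, y)
      print("decision values:", X @ w + b)   # closest points sit at +3 and -3
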
  • Further Results on Delay-Dependent Stability Criteria of Neural Networks With Time-Varying Delays

    Publication Year: 2008, Page(s): 726 - 730
    Cited by: Papers (28)
    PDF (199 KB) | HTML

    In this brief paper, an augmented Lyapunov functional that takes an integral term of the state vector into account is introduced. Owing to this functional, an improved delay-dependent asymptotic stability criterion for delayed neural networks (NNs) is derived in terms of linear matrix inequalities (LMIs). It is shown that the obtained criterion can provide a less conservative result than some existing ones. When linear fractional uncertainties appear in the NNs, a new robust delay-dependent stability condition is also given. Numerical examples demonstrate the applicability of the proposed approach.

  • Adaptive Approximation Based Control: Unifying Neural, Fuzzy and Traditional Adaptive Approximation Approaches (Farrell, J.A. and Polycarpou, M.M.) [Book review]

    Publication Year: 2008, Page(s): 731 - 732
    Cited by: Papers (2)
    PDF (39 KB)
    Freely Available from IEEE
  • 2008 IEEE World Congress on Computational Intelligence

    Publication Year: 2008, Page(s): 733
    PDF (630 KB)
    Freely Available from IEEE
  • Have you visited lately? www.ieee.org [advertisement]

    Publication Year: 2008, Page(s): 734
    PDF (225 KB)
    Freely Available from IEEE
  • Quality without compromise [advertisement]

    Publication Year: 2008, Page(s): 735
    PDF (324 KB)
    Freely Available from IEEE
  • Order form for reprints

    Publication Year: 2008, Page(s): 736
    PDF (353 KB)
    Freely Available from IEEE
  • IEEE Computational Intelligence Society Information

    Publication Year: 2008, Page(s): C3
    PDF (37 KB)
    Freely Available from IEEE

Aims & Scope

IEEE Transactions on Neural Networks is devoted to the science and technology of neural networks, publishing significant technical knowledge, exploratory developments, and applications of neural networks ranging from biology to software to hardware.

 

This Transactions ceased production in 2011. The current retitled publication is IEEE Transactions on Neural Networks and Learning Systems.
