IEEE Transactions on Neural Networks

Issue 2 • Feb. 2009

  • Table of contents

    Page(s): C1
    Freely Available from IEEE
  • IEEE Transactions on Neural Networks publication information

    Page(s): C2
    Freely Available from IEEE
  • Normalized Mutual Information Feature Selection

    Page(s): 189 - 201

    A filter method of feature selection based on mutual information, called normalized mutual information feature selection (NMIFS), is presented. NMIFS is an enhancement over Battiti's MIFS, MIFS-U, and mRMR methods. The average normalized mutual information is proposed as a measure of redundancy among features. NMIFS outperformed MIFS, MIFS-U, and mRMR on several artificial and benchmark data sets without requiring a user-defined parameter. In addition, NMIFS is combined with a genetic algorithm to form a hybrid filter/wrapper method called GAMIFS, which includes an initialization procedure and a mutation operator based on NMIFS to speed up the convergence of the genetic algorithm. GAMIFS overcomes the limitation of incremental search algorithms that are unable to find dependencies between groups of features.

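    A minimal sketch of the greedy selection criterion described in the abstract (relevance minus average normalized redundancy), assuming discrete-valued features so that mutual information can be estimated by counting; the function names and the NumPy/scikit-learn implementation are illustrative, not the authors' code.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def entropy(x):
    """Shannon entropy (nats) of a discrete variable."""
    _, counts = np.unique(x, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log(p)))

def normalized_mi(a, b):
    """I(a; b) normalized by min(H(a), H(b)), used here as the redundancy measure."""
    denom = min(entropy(a), entropy(b))
    return mutual_info_score(a, b) / denom if denom > 0 else 0.0

def nmifs_select(X, y, k):
    """Greedily pick k feature indices: relevance minus average normalized redundancy."""
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k and remaining:
        def score(f):
            relevance = mutual_info_score(X[:, f], y)
            redundancy = (np.mean([normalized_mi(X[:, f], X[:, s]) for s in selected])
                          if selected else 0.0)
            return relevance - redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected
```
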
  • Learning Anticipation via Spiking Networks: Application to Navigation Control

    Page(s): 202 - 216

    In this paper, we introduce a network of spiking neurons devoted to navigation control. Three different examples, dealing with stimuli of increasing complexity, are investigated. In the first, obstacle avoidance in a simulated robot is achieved through a network of spiking neurons. In the second example, a second layer is designed to provide the robot with a target-approaching system, enabling it to move towards visual targets. Finally, a network of spiking neurons for navigation based on visual cues is introduced. In all cases, the robot is assumed to rely on some a priori known responses to low-level sensors (i.e., to contact sensors in the case of obstacles, to proximity target sensors in the case of visual targets, or to the visual target for navigation with visual cues). Based on this knowledge, the robot has to learn the response to high-level stimuli (i.e., range finder sensors or visual input). The biologically plausible paradigm of spike-timing-dependent plasticity (STDP) is included in the network to make the system able to learn high-level responses that guide navigation through a simple unstructured environment. The learning procedure is based on classical conditioning.

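    For reference, a generic pair-based STDP weight update of the kind such networks build on; the amplitudes and time constants below are illustrative textbook-style values, not the parameters used in the paper.

```python
import numpy as np

def stdp_delta_w(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for one pre/post spike pair (times in ms).

    Potentiation when the postsynaptic spike follows the presynaptic one,
    depression when it precedes it; both windows decay exponentially.
    """
    dt = t_post - t_pre
    if dt >= 0:
        return a_plus * np.exp(-dt / tau_plus)
    return -a_minus * np.exp(dt / tau_minus)
```
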
  • Discriminant Nonnegative Tensor Factorization Algorithms

    Page(s): 217 - 235

    Nonnegative matrix factorization (NMF) has proven to be very successful for image analysis, especially for object representation and recognition. NMF requires the object tensor (with valence more than one) to be vectorized. This procedure may result in information loss, since the local object structure is discarded by vectorization. Recently, in order to remedy this disadvantage of NMF methods, nonnegative tensor factorization (NTF) algorithms that can be applied directly to the tensor representation of object collections have been introduced. In this paper, we propose a series of unsupervised and supervised NTF methods; that is, we extend several NMF methods to tensors of arbitrary valence. Moreover, by incorporating discriminant constraints inside the NTF decompositions, we present a series of discriminant NTF methods. The proposed approaches are tested for face verification and facial expression recognition, where it is shown that they outperform other popular subspace approaches.

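    The tensor factorizations discussed here generalize NMF, so a compact reference point is the classical multiplicative-update NMF on a matrix of vectorized images; the sketch below is the standard Lee-Seung algorithm, not the paper's discriminant NTF, and its parameters are illustrative.

```python
import numpy as np

def nmf_multiplicative(V, rank, n_iter=200, eps=1e-9, seed=0):
    """Factor a nonnegative matrix V (features x samples) as V ~ W @ H with W, H >= 0,
    using multiplicative updates for the Frobenius-norm objective."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update coefficients
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update basis images
    return W, H
```
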
  • Adaptive Neural Network Tracking Control With Disturbance Attenuation for Multiple-Input Nonlinear Systems

    Page(s): 236 - 247

    A switching adaptive neural network controller is designed for multiple-input nonlinear dynamical systems that are affine in the control and contain unknown nonlinearities; the controller is capable of arbitrarily attenuating L2 or L∞ external disturbances. In the absence of disturbances, a uniform ultimate boundedness property of the tracking error with respect to an arbitrarily small set around the origin is guaranteed, as well as the uniform boundedness of all the signals in the closed loop. The proposed switching adaptive controller effectively avoids possible division by zero while guaranteeing the continuity of switching. In this way, problems connected to the existence of solutions and chattering phenomena are alleviated. Simulations illustrate the approach.

  • ICA Color Space for Pattern Recognition

    Page(s): 248 - 257

    This paper presents a novel independent component analysis (ICA) color space method for pattern recognition. The novelty of the ICA color space method is twofold: 1) deriving an effective color image representation based on ICA, and 2) implementing efficient color image classification using the independent color image representation and an enhanced Fisher model (EFM). First, the ICA color space method assumes that each color image is defined by three independent source images, which can be derived by means of a blind source separation procedure such as ICA. Unlike the RGB color space, where the R, G, and B component images are correlated, the ICA color space method derives three component images C1, C2, and C3 that are independent and hence uncorrelated. Second, the three independent color component images are concatenated to form an augmented pattern vector, whose dimensionality is reduced by principal component analysis (PCA). An EFM then derives the discriminating features of the reduced pattern vector for pattern recognition. The effectiveness of the proposed ICA color space method is demonstrated using a complex grand challenge pattern recognition problem and a large-scale database. In particular, the Face Recognition Grand Challenge (FRGC) and the Biometric Experimentation Environment (BEE) reveal that, for the most challenging FRGC version 2 Experiment 4, which contains 12,776 training images, 16,028 controlled target images, and 8,014 uncontrolled query images, the ICA color space method achieves a face verification rate (ROC III) of 73.69% at a false accept rate (FAR) of 0.1%, compared with a face verification rate (FVR) of 67.13% for the RGB color space (using the same EFM) and 11.86% for the FRGC baseline algorithm at the same FAR.

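    A rough sketch of the pipeline described above using scikit-learn: FastICA maps RGB pixel values to three independent component images, PCA reduces the concatenated pattern vector, and LinearDiscriminantAnalysis stands in for the enhanced Fisher model (EFM). Fitting ICA on pooled training pixels and the choice of 100 PCA components are simplifying assumptions, not the paper's exact procedure.

```python
import numpy as np
from sklearn.decomposition import FastICA, PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def ica_color_vectors(images_rgb, ica=None):
    """images_rgb: (n_images, h, w, 3). Returns concatenated C1/C2/C3 pattern vectors."""
    n, h, w, _ = images_rgb.shape
    pixels = images_rgb.reshape(-1, 3).astype(float)      # each pixel as an (R, G, B) sample
    if ica is None:
        ica = FastICA(n_components=3, random_state=0).fit(pixels)
    sources = ica.transform(pixels)                       # independent C1, C2, C3 values
    # Reassemble per-channel images and concatenate C1, C2, C3 into one long vector.
    vectors = sources.reshape(n, h, w, 3).transpose(0, 3, 1, 2).reshape(n, -1)
    return vectors, ica

def fit_recognizer(train_images, labels, n_pca=100):
    features, ica = ica_color_vectors(train_images)
    pca = PCA(n_components=n_pca).fit(features)
    lda = LinearDiscriminantAnalysis().fit(pca.transform(features), labels)  # EFM stand-in
    return ica, pca, lda
```
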
  • Constructing Ensembles of Classifiers by Means of Weighted Instance Selection

    Page(s): 258 - 277

    In this paper, we approach the problem of constructing ensembles of classifiers from the point of view of instance selection. Instance selection aims to obtain a subset of the instances available for training that achieves at least the same performance as the whole training set. In this way, instance selection algorithms try to keep the performance of the classifiers while reducing the number of instances in the training set. Meanwhile, boosting methods construct an ensemble of classifiers iteratively, focusing each new member on the most difficult instances by means of a biased distribution of the training instances. In this work, we show how these two methodologies can be combined advantageously: instance selection algorithms are used for boosting, with the objective of optimizing the training error weighted by the biased distribution of instances given by the boosting method. Our method can therefore be considered boosting by instance selection. Instance selection has mostly been developed and used for k-nearest neighbor (k-NN) classifiers, so, as a first step, our methodology is suited to constructing ensembles of k-NN classifiers. Constructing ensembles of classifiers by means of instance selection has the important feature of reducing the space complexity of the final ensemble, as only a subset of the instances is selected for each classifier. However, the methodology is not restricted to k-NN classifiers. Other classifiers, such as decision trees and support vector machines (SVMs), may also benefit from a smaller training set, as they produce simpler classifiers if an instance selection algorithm is applied before training. In the experimental section, we show that the proposed approach produces better and simpler ensembles than the random subspace method (RSM) for k-NN and than standard ensemble methods for C4.5 and SVMs.

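    A simplified sketch of the boosting-by-instance-selection idea for k-NN base classifiers, with weighted sampling standing in for the paper's instance selection algorithm; binary labels are assumed and all parameters are illustrative, so this is not the authors' exact procedure.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def boost_by_instance_selection(X, y, n_rounds=10, subset_frac=0.3, k=3, seed=0):
    """Each round selects a weighted subset of instances, trains a k-NN member on it,
    and reweights instances AdaBoost-style according to the weighted training error."""
    rng = np.random.default_rng(seed)
    n = len(y)
    w = np.full(n, 1.0 / n)                                # boosting distribution over instances
    members, alphas = [], []
    for _ in range(n_rounds):
        idx = rng.choice(n, size=max(k + 1, int(subset_frac * n)), replace=False, p=w)
        clf = KNeighborsClassifier(n_neighbors=k).fit(X[idx], y[idx])
        miss = (clf.predict(X) != y).astype(float)
        err = float(np.clip(np.dot(w, miss), 1e-10, 1 - 1e-10))   # weighted training error
        alpha = 0.5 * np.log((1 - err) / err)              # member weight (binary labels assumed)
        w *= np.exp(np.where(miss > 0, alpha, -alpha))
        w /= w.sum()
        members.append(clf)
        alphas.append(alpha)
    return members, alphas
```
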
  • Learning Without Human Expertise: A Case Study of the Double Dummy Bridge Problem

    Page(s): 278 - 299

    Artificial neural networks, trained only on sample deals, without presentation of any human knowledge or even the rules of the game, are used to estimate the number of tricks to be taken by one pair of bridge players in the so-called double dummy bridge problem (DDBP). Four representations of a deal in the input layer were tested, leading to significant differences in the achieved results. In order to test the networks' ability to extract knowledge from sample deals, experiments with additional inputs representing estimators of hand strength used by humans were also performed. The superior network trained solely on sample deals outperformed all other architectures, including those using explicit human knowledge of the game of bridge. Considering suit contracts, this network, in a sample of 100,000 testing deals, output a perfect answer in 53.11% of the cases and was mistaken by more than one trick in only 3.52% of them. The respective figures for notrump contracts were 37.80% and 16.36%. The above results were compared with those obtained by 24 professional human bridge players (members of the Polish Bridge Union) on test sets of between 27 and 864 deals per player (depending on the player's time availability). In the case of suit contracts, a perfect answer was obtained in 53.06% of the testing deals for the ten top-classified players and in 48.66% for the remaining 14 participants of the experiment. For notrump contracts, the respective figures were 73.68% and 60.78%. Besides checking the ability of neural networks to solve the DDBP, the other goal of this research was to analyze connection weights in trained networks in a quest for weight patterns that are explainable by experienced human bridge players. Quite surprisingly, several such patterns were discovered (e.g., preference for groups of honors, special attention to Aces, favoring cards from the trump suit, gradual importance of cards within one suit, from the two up to the Ace, etc.). Both the numerical figures and the weight patterns are stable and repeatable across a sample of neural architectures (differing only by randomly chosen initial weights). In summary, the research described in this paper provides a detailed comparison between various data representations of the DDBP solved by neural networks. On a more general note, this approach can be extended to a certain class of binary classification problems.

  • Nonlinear Dimensionality Reduction by Locally Linear Inlaying

    Page(s): 300 - 315

    High-dimensional data are involved in many fields of information processing. However, the intrinsic structure of these data can sometimes be described by a few degrees of freedom. To discover these degrees of freedom, or the low-dimensional nonlinear manifold underlying a high-dimensional space, many manifold learning algorithms have been proposed. Here we describe a novel algorithm, locally linear inlaying (LLI), which combines simple geometric intuitions and rigorously established optimality to compute the global embedding of a nonlinear manifold. LLI's divide-and-conquer strategy gives it several advantages. First, its time complexity is linear in the number of data points, so LLI can be implemented efficiently. Second, LLI overcomes problems caused by nonuniform sample distributions. Third, unlike existing algorithms such as isometric feature mapping (Isomap), local tangent space alignment (LTSA), and locally linear coordination (LLC), LLI is robust to noise. In addition, to evaluate embedding results quantitatively, two criteria, based on information theory and Kolmogorov complexity theory, respectively, are proposed. Furthermore, we demonstrate the efficiency and effectiveness of the proposed method on synthetic and real-world data sets.

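    LLI itself is not available in common libraries, but the baselines it is compared against are; the snippet below is only a convenience for reproducing reference embeddings with scikit-learn on a standard synthetic manifold (the swiss roll), with illustrative neighborhood sizes.

```python
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap, LocallyLinearEmbedding

# 3-D swiss roll with noise, embedded into 2-D by two of the baselines cited above.
X, _ = make_swiss_roll(n_samples=2000, noise=0.05, random_state=0)
isomap_2d = Isomap(n_neighbors=12, n_components=2).fit_transform(X)
ltsa_2d = LocallyLinearEmbedding(n_neighbors=12, n_components=2,
                                 method="ltsa").fit_transform(X)
```
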
  • Recognition of Abstract Objects Via Neural Oscillators: Interaction Among Topological Organization, Associative Memory and Gamma Band Synchronization

    Page(s): 316 - 335

    Synchronization of neural activity in the gamma band is assumed to play a significant role not only in perceptual processing, but also in higher cognitive functions. Here, we propose a neural network of Wilson-Cowan oscillators to simulate recognition of abstract objects, each represented as a collection of four features. Features are ordered in topological maps of oscillators connected via excitatory lateral synapses, to implement a similarity principle. Experience with previous objects is stored in long-range synapses connecting the different topological maps, trained via timing-dependent Hebbian learning (previous knowledge principle). Finally, a downstream decision network detects the presence of a reliable object representation when all features are oscillating in synchrony. Simulations in which the network is presented with one to four simultaneous objects, some with missing and/or modified properties, suggest that the network can reconstruct objects and segment them from the other simultaneously present objects, even in the case of degraded information, noise, and moderate correlation among the inputs (one common feature). The balance between sensitivity and specificity depends on the strength of the Hebbian learning. Achieving a correct reconstruction in all cases, however, requires ad hoc selection of the oscillation frequency. The model represents an attempt to investigate the interactions among topological maps, autoassociative memory, and gamma-band synchronization for the recognition of abstract objects.

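    For orientation, a single Wilson-Cowan excitatory/inhibitory oscillator integrated with forward Euler; the coupling constants, inputs, and sigmoid parameters below are generic textbook-style values, not the parameters of the paper's network.

```python
import numpy as np

def sigmoid(x, a=1.0, theta=4.0):
    """Logistic activation used in the Wilson-Cowan rate equations."""
    return 1.0 / (1.0 + np.exp(-a * (x - theta)))

def wilson_cowan(T=200.0, dt=0.1, P=1.25, Q=0.0,
                 c1=16.0, c2=12.0, c3=15.0, c4=3.0, tau_e=1.0, tau_i=1.0):
    """Simulate one E/I pair: tau_e dE/dt = -E + S(c1*E - c2*I + P),
    tau_i dI/dt = -I + S(c3*E - c4*I + Q)."""
    steps = int(T / dt)
    E, I = np.zeros(steps), np.zeros(steps)
    for t in range(steps - 1):
        E[t + 1] = E[t] + dt / tau_e * (-E[t] + sigmoid(c1 * E[t] - c2 * I[t] + P))
        I[t + 1] = I[t] + dt / tau_i * (-I[t] + sigmoid(c3 * E[t] - c4 * I[t] + Q))
    return E, I
```
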
  • Sampled-Data Adaptive NN Tracking Control of Uncertain Nonlinear Systems

    Page(s): 336 - 355

    In this paper, adaptive neural tracking controllers designed for digital computer implementation are proposed for a class of single-input-single-output (SISO) uncertain nonlinear systems. The overall scheme can be considered a sampled-data adaptive neural control system. As an intermediate result, it is proven that, for a sufficiently small sampling period, the emulated adaptive neural controller, i.e., the discrete-time implementation of the continuous-time adaptive neural network controller, ensures semiglobal uniform ultimate boundedness of the closed-loop system. Then, based on the exact discrete-time model, a controller redesign is proposed that performs efficiently for sampling periods for which the emulation controller fails. The redesigned controller consists of two terms: the emulated control law and an extra robustness term designed to increase the order of the perturbation (with respect to the sampling period) in the Lyapunov difference. In all cases, high-order neural networks are employed to approximate the unknown nonlinearities. Using Lyapunov techniques, it is proven that, for a sufficiently small sampling period, the proposed redesigned controller ensures (semiglobal) boundedness of all the signals in the closed loop while the output of the system converges to a small neighborhood of the desired trajectory. Simulation results illustrate the superiority of the proposed scheme with respect to the emulation controller and verify the theoretical analysis.

  • Robust Neural Network Motion Tracking Control of Piezoelectric Actuation Systems for Micro/Nanomanipulation

    Page(s): 356 - 367

    This paper presents a robust neural network motion tracking control methodology for piezoelectric actuation systems employed in micro/nanomanipulation. The methodology is proposed for tracking desired motion trajectories in the presence of unknown system parameters, nonlinearities (including the hysteresis effect), and external disturbances in the control system. The related control issues are investigated, and a control methodology combining neural networks with a sliding control scheme is established. In particular, radial basis function (RBF) neural networks are chosen for function approximation. The stability of the closed-loop system, as well as the convergence of the position and velocity tracking errors to zero, is assured by the control methodology in the presence of the aforementioned conditions. An offline learning procedure is also proposed to improve the motion tracking performance. Precise tracking results of the proposed control methodology for a desired motion trajectory are demonstrated in the experimental study. With such a motion tracking capability, the proposed control methodology promises the realization of high-performance piezoelectric-actuated micro/nanomanipulation systems.

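    A minimal Gaussian RBF network approximator of the kind used to estimate unknown nonlinearities; the fixed centers, common width, and offline least-squares fit are illustrative choices standing in for the paper's adaptation law.

```python
import numpy as np

class RBFNetwork:
    """y(x) ~ w . phi(x), with Gaussian basis functions at fixed centers."""

    def __init__(self, centers, width):
        self.centers = np.atleast_2d(centers)    # (n_basis, dim)
        self.width = float(width)
        self.w = np.zeros(len(self.centers))

    def phi(self, x):
        """Vector of Gaussian basis activations for input x."""
        d2 = np.sum((self.centers - np.asarray(x)) ** 2, axis=1)
        return np.exp(-d2 / (2.0 * self.width ** 2))

    def fit_offline(self, X, y):
        """Least-squares fit of the output weights on recorded data (offline learning)."""
        Phi = np.vstack([self.phi(x) for x in X])
        self.w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        return self

    def predict(self, x):
        return float(self.phi(x) @ self.w)
```
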
  • Representations of Continuous Attractors of Recurrent Neural Networks

    Page(s): 368 - 372

    A continuous attractor of a recurrent neural network (RNN) is a set of connected stable equilibrium points. Continuous attractors have been used to describe the encoding of continuous stimuli in neural networks, and their dynamic behaviors exhibit interesting properties. This brief derives explicit representations of continuous attractors of RNNs. Representations of continuous attractors of linear RNNs, as well as of linear-threshold (LT) RNNs, are obtained under some conditions. These representations can be regarded as solutions for the continuous attractors of the networks, and they provide clear and complete descriptions of the continuous attractors.

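    As a purely illustrative aid (not the brief's result): for a linear RNN dx/dt = -x + Wx + b, the equilibrium set is a particular solution of (I - W)x = b plus the null space of (I - W); when that null space is nontrivial, the equilibria form a connected affine set (stability aside). The sketch below computes this numerically with NumPy/SciPy.

```python
import numpy as np
from scipy.linalg import lstsq, null_space

def linear_rnn_equilibria(W, b):
    """Return one equilibrium of dx/dt = -x + W x + b and the directions along which
    it extends into a continuum (the null space of I - W)."""
    A = np.eye(W.shape[0]) - W
    x_star, *_ = lstsq(A, b)        # a particular equilibrium (least-squares if inconsistent)
    directions = null_space(A)      # columns span the equilibrium directions
    return x_star, directions

# Example: an eigenvalue of W equal to 1 yields a line of equilibria through x_star.
W = np.array([[1.0, 0.0], [0.0, 0.5]])
x_star, dirs = linear_rnn_equilibria(W, np.array([0.0, 1.0]))
```
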
  • IEEE Computational Intelligence Society Information

    Page(s): C3
    Freely Available from IEEE
  • Blank page [back cover]

    Page(s): C4
    Freely Available from IEEE

Aims & Scope

IEEE Transactions on Neural Networks is devoted to the science and technology of neural networks, publishing papers that disclose significant technical knowledge, exploratory developments, and applications of neural networks, from biology to software to hardware.


This Transactions ceased production in 2011. The current retitled publication is IEEE Transactions on Neural Networks and Learning Systems.
