
IEEE Transactions on Neural Networks

Issue 10 • Oct. 2009

  • Table of contents

    Publication Year: 2009 , Page(s): C1
    PDF (38 KB)
    Freely Available from IEEE
  • IEEE Transactions on Neural Networks publication information

    Publication Year: 2009 , Page(s): C2
    PDF (38 KB)
    Freely Available from IEEE
  • When Does Online BP Training Converge?

    Publication Year: 2009 , Page(s): 1529 - 1539
    Cited by:  Papers (11)
    PDF (438 KB) | HTML

    Backpropagation (BP) neural networks have been widely applied in scientific research and engineering. The success of such applications, however, relies on the convergence of the training procedure involved in neural network learning. We settle this convergence question by proving two fundamental theorems on the online BP training procedure. One theorem shows that under mild conditions the gradient sequence of the error function converges to zero (weak convergence); the other establishes that the weight sequence defined by the procedure converges to a fixed value at which the error function attains its minimum (strong convergence). The weak convergence theorem sharpens and generalizes existing convergence analyses, while the strong convergence theorem provides new results on the convergence of the online BP training procedure. The results reveal that with any analytic sigmoid activation function, the online BP training procedure is always convergent, which underlies the successful application of BP neural networks.

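A minimal sketch of the online (sample-by-sample) BP update with a weak-convergence check on the gradient norm; the single tanh neuron, synthetic data, learning rate, and tolerance are illustrative assumptions, not the paper's setting:

```python
# Toy online BP loop; weak convergence means the gradient sequence -> 0.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))                 # training inputs
y = np.tanh(X @ np.array([0.5, -1.0, 2.0]))       # synthetic targets

W = 0.1 * rng.standard_normal(3)                  # weights of one tanh neuron
eta = 0.05                                        # learning rate

for epoch in range(500):
    grad_norms = []
    for x, t in zip(X, y):                        # online: one sample at a time
        out = np.tanh(W @ x)
        grad = (out - t) * (1.0 - out**2) * x     # gradient of squared error
        W -= eta * grad
        grad_norms.append(np.linalg.norm(grad))
    if max(grad_norms) < 1e-4:                    # weak convergence reached
        print(f"gradients vanished at epoch {epoch}")
        break
```
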
  • Analysis and Modeling of Naturalness in Handwritten Characters

    Publication Year: 2009 , Page(s): 1540 - 1553
    Cited by:  Papers (3)
    PDF (1271 KB) | HTML

    In this paper, we define the naturalness of handwritten characters as the difference between the strokes of the handwritten characters and the archetypal fonts on which they are based. With this definition, we mathematically analyze the relationship between a font and its naturalness using canonical correlation analysis (CCA), multiple linear regression analysis, feedforward neural networks (FFNNs) with sliding windows, and recurrent neural networks (RNNs). This analysis reveals that certain properties of font character strokes do not have a linear relationship with their naturalness. In turn, this suggests that nonlinear techniques should be used to model naturalness, and in our investigations we find that an RNN with a recurrent output layer performs best among four linear and nonlinear models. These results indicate that it is possible to model naturalness (defined in our study as the difference between handwritten and archetypal font characters, and more generally as the difference between the behavior of a natural system and a corresponding basic system) and that naturalness learning is a promising approach for generating handwritten characters.

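A hedged sketch of the linear-analysis step only: CCA between font stroke features and "naturalness" (handwritten minus font strokes). The data below are synthetic, with a deliberately nonlinear naturalness, so the canonical correlations come out weak, mirroring the finding that the relationship is not linear:

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(1)
font = rng.standard_normal((200, 6))              # archetypal stroke features
handwritten = font + 0.2 * font**2 + 0.1 * rng.standard_normal((200, 6))
naturalness = handwritten - font                  # per the paper's definition

cca = CCA(n_components=2)
F_c, N_c = cca.fit_transform(font, naturalness)   # canonical variates
for i in range(2):
    r = np.corrcoef(F_c[:, i], N_c[:, i])[0, 1]
    print(f"canonical correlation {i}: {r:.3f}")  # weak: relation is nonlinear
```
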
  • Privacy-Preserving Backpropagation Neural Network Learning

    Publication Year: 2009 , Page(s): 1554 - 1564
    Cited by:  Papers (7)
    PDF (352 KB) | HTML

    With the development of distributed computing environments, many learning problems now have to deal with distributed input data. To enhance cooperation in learning, it is important to address the privacy concern of each data holder by extending the privacy-preservation notion to the original learning algorithms. In this paper, we focus on preserving privacy in an important learning model, multilayer neural networks. We present a privacy-preserving two-party distributed backpropagation algorithm which allows a neural network to be trained without requiring either party to reveal her data to the other. We provide a complete correctness and security analysis of our algorithms. The effectiveness of our algorithms is verified by experiments on various real-world data sets.

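A toy illustration of one building block behind such protocols, additive secret sharing of local gradients: each party splits its contribution into random shares so only the sum is revealed. This is not the paper's full protocol, just the sharing idea:

```python
import numpy as np

rng = np.random.default_rng(2)

def share(value):
    """Split a vector into two additive shares: value = s0 + s1."""
    s0 = rng.standard_normal(value.shape)
    return s0, value - s0

grad_alice = np.array([0.2, -0.5, 1.0])   # Alice's local gradient
grad_bob = np.array([-0.1, 0.4, 0.3])     # Bob's local gradient

a0, a1 = share(grad_alice)                # Alice keeps a0, sends a1 to Bob
b0, b1 = share(grad_bob)                  # Bob keeps b1, sends b0 to Alice

partial_alice = a0 + b0                   # each side sums the shares it holds
partial_bob = a1 + b1
joint_grad = partial_alice + partial_bob  # only the sum is ever revealed
assert np.allclose(joint_grad, grad_alice + grad_bob)
```
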
  • Accurate Estimation of ICA Weight Matrix by Implicit Constraint Imposition Using Lie Group

    Publication Year: 2009 , Page(s): 1565 - 1580
    Cited by:  Papers (1)
    PDF (3815 KB) | HTML

    This paper presents a new stochastic algorithm to optimize the independence criterion (mutual information) among multivariate data using local, global, and hybrid optimizers, in conjunction with techniques involving a Lie group and its corresponding Lie algebra, for implicit imposition of the orthonormality constraint among the estimated sources. The major advantage of the proposed algorithm is the increased accuracy with which the weight matrix in the independent component analysis (ICA) model is estimated, compared to conventional schemes. When the local optimizer with Lie group techniques and the fast fixed-point (fastICA) algorithm were tested on the same set of random vectors, the former outperformed the conventional method by producing accurate weight matrix estimates in a majority of the test cases. Importantly, in our approach, the use of a Lie group to "lock" the weight matrix estimates onto the constraint surface enabled easy realization of the hybrid optimizers, which consistently yielded near-global-optimum solutions in most of the test cases, compared to well-known global optimizers. The inherent computational overhead of the hybrid optimizers was lowered by preprocessing the input data and periodically integrating the local optimizers with the global one. The proposed algorithms were applied to six-dimensional multispectral satellite image data to demonstrate their usefulness for accurate ICA weight matrix estimation.

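A sketch of the Lie-group trick described above: parameterize updates in the Lie algebra so(n) (skew-symmetric matrices), so that every multiplicative update W <- exp(A) W stays on the orthonormality constraint surface by construction. The step size and the stand-in gradient are illustrative:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
n = 4
W = np.linalg.qr(rng.standard_normal((n, n)))[0]  # orthonormal initial W

for step in range(10):
    G = rng.standard_normal((n, n))   # stand-in for the contrast gradient
    A = G - G.T                       # project onto the Lie algebra (skew part)
    W = expm(-0.01 * A) @ W           # multiplicative update stays on the group

# W remains orthonormal to machine precision -- no re-orthogonalization needed
print(np.max(np.abs(W @ W.T - np.eye(n))))
```
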
  • Multistability and New Attraction Basins of Almost-Periodic Solutions of Delayed Neural Networks

    Publication Year: 2009 , Page(s): 1581 - 1593
    Cited by:  Papers (22)
    PDF (866 KB) | HTML

    In this paper, we investigate multistability of almost-periodic solutions of recurrently connected neural networks with delays (simply called delayed neural networks). We reveal that under some conditions, the space R^n can be divided into 2^n subsets, and in each subset the delayed n-neuron neural network has a locally stable almost-periodic solution. Furthermore, we also investigate the attraction basins of these almost-periodic solutions. We reveal that the attraction basin of an almost-periodic trajectory is larger than the subset where the corresponding almost-periodic trajectory is located. In addition, several numerical simulations are presented to corroborate the theoretical results.

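A numerical sketch of the phenomenon: a two-neuron delayed network integrated from constant initial histories in each of the 2^n = 4 orthants settles onto a different stable state (an equilibrium, the simplest almost-periodic solution) in each. The weights, delay, and dynamics are illustrative, not the paper's conditions:

```python
import numpy as np

W = np.array([[2.0, -0.1], [-0.1, 2.0]])   # self-excitation dominates coupling
tau, dt, T = 1.0, 0.01, 40.0
d = int(tau / dt)                          # delay measured in steps

def simulate(x0):
    hist = np.tile(x0, (d + 1, 1))         # constant initial history
    x = hist[-1].copy()
    for _ in range(int(T / dt)):
        x = x + dt * (-x + W @ np.tanh(hist[0]))  # x' = -x + W tanh(x(t-tau))
        hist = np.vstack([hist[1:], x])
    return x

for x0 in ([1, 1], [1, -1], [-1, 1], [-1, -1]):
    print(x0, "->", np.round(simulate(np.array(x0, float)), 3))
```
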
  • Semisupervised Multicategory Classification With Imperfect Model

    Publication Year: 2009 , Page(s): 1594 - 1603
    Cited by:  Papers (3)
    PDF (384 KB) | HTML

    Semisupervised learning has attracted growing interest in recent years and many methods have been proposed. While existing semisupervised methods have shown promising empirical performance, their development has been based largely on heuristics. In this paper, we investigate semisupervised multicategory classification with an imperfect mixture density model. In the proposed model, the training data come from a probability distribution that can be modeled imperfectly by an identifiable mixture distribution. Furthermore, we propose a semisupervised multicategory classification method and establish its generalization error bounds. The theoretical analysis shows that the proposed method can utilize unlabeled data effectively and can achieve a fast convergence rate.

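A toy sketch of the underlying semisupervised mixture idea: labeled points pin their component responsibilities while unlabeled points get soft EM responsibilities, so the unlabeled data sharpen the component estimates. This illustrates the mechanism only; it is not the paper's estimator or its error-bound machinery:

```python
import numpy as np

rng = np.random.default_rng(4)
X_lab = np.r_[rng.normal(-2, 1, 20), rng.normal(2, 1, 20)]
y_lab = np.r_[np.zeros(20), np.ones(20)].astype(int)
X_unl = np.r_[rng.normal(-2, 1, 200), rng.normal(2, 1, 200)]

mu = np.array([-1.0, 1.0])                      # crude initial means
for _ in range(50):                             # EM iterations
    # E-step on unlabeled data (equal weights/variances for simplicity)
    dist = (X_unl[:, None] - mu[None, :]) ** 2
    resp = np.exp(-0.5 * dist)
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: labeled points contribute with hard (0/1) responsibilities
    for k in range(2):
        w_unl = resp[:, k]
        w_lab = (y_lab == k).astype(float)
        mu[k] = (w_unl @ X_unl + w_lab @ X_lab) / (w_unl.sum() + w_lab.sum())
print(np.round(mu, 3))   # approaches the true component means (-2, 2)
```
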
  • Granular Neural Networks and Their Development Through Context-Based Clustering and Adjustable Dimensionality of Receptive Fields

    Publication Year: 2009 , Page(s): 1604 - 1616
    Cited by:  Papers (13)
    PDF (1493 KB) | HTML

    In this study, we present a new architecture of a granular neural network and provide a comprehensive design methodology as well as an algorithmic setup supporting its development. The proposed neural network belongs to the broad category of radial basis function neural networks (RBFNNs) in the sense that its topology involves a collection of receptive fields. In contrast to standard RBFNN architectures, here we form individual receptive fields in subspaces of the original input space rather than in the entire input space, and these subspaces can differ from one receptive field to another. The architecture of the network fully reflects the structure of the training data, which are granulated with the aid of clustering techniques. More specifically, the output space is granulated using K-means clustering, while the information granules in the multidimensional input space are formed by the so-called context-based fuzzy C-means, which takes into account the structure already formed in the output space. The innovative facet of the development is a dynamic reduction of the dimensionality of the input space: the information granules are formed in a subspace of the overall input space, obtained by selecting a suitable subset of input variables so that the subspace retains the structure of the entire space. As this search is of combinatorial character, we use genetic optimization (genetic algorithms (GAs), to be more specific) to determine the optimal input subspaces. A series of numeric studies exploiting synthetic data and data from the Machine Learning Repository, University of California at Irvine, provides detailed insight into the nature of the algorithm and its parameters and offers some comparative analysis.

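A minimal sketch of the two-stage granulation described above: K-means on the output space defines contexts, and inputs are then clustered within each context (plain K-means stands in here for context-based fuzzy C-means). Data and cluster counts are illustrative:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
X = rng.standard_normal((300, 4))               # inputs
y = (X[:, :2] ** 2).sum(axis=1, keepdims=True)  # synthetic outputs

contexts = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(y)
receptive_fields = []
for c in range(3):
    Xc = X[contexts == c]                       # inputs falling in context c
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(Xc)
    receptive_fields.append(km.cluster_centers_)  # prototypes per context
print([rf.shape for rf in receptive_fields])
```
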
  • Pinning Stabilization of Linearly Coupled Stochastic Neural Networks via Minimum Number of Controllers

    Publication Year: 2009 , Page(s): 1617 - 1629
    Cited by:  Papers (43)
    PDF (693 KB) | HTML

    The pinning stabilization problem of linearly coupled stochastic neural networks (LCSNNs) is studied in this paper. A minimum number of controllers are used to force the LCSNNs to the desired equilibrium point by fully utilizing the structure of the network. To pin the LCSNNs to a desired state, only one controller is required for a strongly connected network topology, and m controllers, which will be shown to be the minimum number, are needed for LCSNNs with an m-reducible coupling matrix. The isolated node of the LCSNNs can be stable, periodic, or even chaotic. The coupling Laplacian matrix of the LCSNNs can be symmetric irreducible, asymmetric irreducible, or m-reducible, which means that the network topology can be strongly connected, weakly connected, or even unconnected; there is no constraint on the network topology. Some criteria are derived to judge whether the LCSNNs can be controlled in mean square by the designed controllers. The criteria are expressed in terms of strict linear matrix inequalities, which can be easily checked using recently developed algorithms. Moreover, numerical examples including small-world and scale-free networks demonstrate that our theoretical results are valid and efficient for large systems.

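A sketch of single-controller pinning on a strongly connected network: linearly coupled nodes with a feedback controller applied to node 0 only. The all-to-all coupling and gain values are illustrative (not the paper's LMI-derived ones), and the noise-free case stands in for the stochastic setting:

```python
import numpy as np

rng = np.random.default_rng(6)
N = 10
L = N * np.eye(N) - np.ones((N, N))   # Laplacian of the complete graph

x = rng.standard_normal(N)            # node states; target equilibrium is 0
c, k, dt = 5.0, 20.0, 0.005           # coupling strength, pinning gain, step
for _ in range(4000):
    dx = np.tanh(x) - c * (L @ x)     # node dynamics plus diffusive coupling
    dx[0] -= k * x[0]                 # the single pinned controller at node 0
    x = x + dt * dx
print(np.max(np.abs(x)))              # all ten nodes driven near zero
```
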
  • Adaptive Neural Control Design for Nonlinear Distributed Parameter Systems With Persistent Bounded Disturbances

    Publication Year: 2009 , Page(s): 1630 - 1644
    Cited by:  Papers (13)
    PDF (995 KB) | HTML

    In this paper, an adaptive neural network (NN) control with a guaranteed L∞-gain performance is proposed for a class of parabolic partial differential equation (PDE) systems with unknown nonlinearities and persistent bounded disturbances. Initially, the Galerkin method is applied to the PDE system to derive a low-order ordinary differential equation (ODE) system that accurately describes the dynamics of the dominant (slow) modes of the PDE system. Subsequently, based on the low-order slow model and the Lyapunov technique, an adaptive modal feedback controller is developed such that the closed-loop slow system is semiglobally input-to-state practically stable (ISpS) with an L∞-gain performance. In the proposed control scheme, a radial basis function (RBF) NN is employed to approximate the unknown term in the derivative of the Lyapunov function arising from the unknown system nonlinearities. The adaptive L∞-gain control problem is formulated as a linear matrix inequality (LMI) problem. Moreover, by using the existing LMI optimization technique, a suboptimal controller is obtained in the sense of minimizing an upper bound of the L∞-gain while respecting control constraints. Furthermore, it is shown that the proposed controller ensures the semiglobal input-to-state practical stability and L∞-gain performance of the closed-loop PDE system. Finally, the developed design method is applied to the temperature profile control of a catalytic rod; the simulation results show the effectiveness of the proposed controller.

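A sketch of the Galerkin step only: project a 1-D parabolic PDE (a heat equation with a stand-in nonlinear source) onto its dominant sine modes to obtain the low-order slow ODE model on which such a controller would be designed. The domain, source term, and mode count are illustrative:

```python
import numpy as np

nx, nm = 200, 3                              # grid points, retained slow modes
z = np.linspace(0, np.pi, nx)
dz = z[1] - z[0]
phi = np.array([np.sqrt(2 / np.pi) * np.sin((i + 1) * z) for i in range(nm)])
lam = -np.array([(i + 1) ** 2 for i in range(nm)], float)  # mode eigenvalues

a = np.array([1.0, 0.5, 0.2])                # modal amplitudes a_i(t)
dt = 1e-3
for _ in range(5000):
    u = a @ phi                              # reconstruct the profile u(z, t)
    f = 2.0 * np.sin(u)                      # stand-in nonlinearity
    a = a + dt * (lam * a + phi @ f * dz)    # da_i/dt = lam_i a_i + <phi_i, f>
print(np.round(a, 4))                        # state of the slow ODE model
```
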
  • Universal Perceptron and DNA-Like Learning Algorithm for Binary Neural Networks: LSBF and PBF Implementations

    Publication Year: 2009 , Page(s): 1645 - 1658
    Cited by:  Papers (9)
    PDF (986 KB) | HTML

    The universal perceptron (UP), a generalization of Rosenblatt's perceptron, is considered in this paper; it is capable of implementing all Boolean functions (BFs). BFs fall into three classes: 1) the linearly separable Boolean function (LSBF) class, 2) the parity Boolean function (PBF) class, and 3) the non-LSBF and non-PBF class. To implement these functions, the UP takes different kinds of simple topological structures, each containing at most one hidden layer along with the smallest possible number of hidden neurons. Inspired by the concept of DNA sequences in biological systems, a novel learning algorithm named DNA-like learning is developed, which can quickly train a network to any prescribed BF. The focus is on realizing LSBFs and PBFs with a single-layer perceptron (SLP) using the new algorithm. Two criteria for LSBFs and PBFs are proposed, respectively, and a new measure for a BF, named the nonlinearly separable degree (NLSD), is introduced; in the sense of this measure, the PBF is the most complex. The new algorithm has many advantages, in particular fast running speed, good robustness, and no need to consider convergence. For example, the number of iterations and computations needed to implement the basic 2-bit logic operations such as and, or, and xor with the new algorithm is far smaller than that needed by existing algorithms such as the error-correction (EC) and backpropagation (BP) algorithms. Moreover, the synaptic weights and threshold values derived from the UP can be used directly in designing the templates of cellular neural networks (CNNs), which have been considered a new spatial-temporal sensory computing paradigm.

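A minimal single-layer perceptron check on the 2-bit BFs mentioned above, using the classical error-correction rule as a baseline (the DNA-like algorithm itself is detailed in the paper): and/or are LSBFs and are learned; xor is the 2-bit PBF and a single layer fails:

```python
import numpy as np

def train_perceptron(truth_table, epochs=50, eta=0.5):
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
    y = np.array(truth_table, float)
    w, b = np.zeros(2), 0.0
    for _ in range(epochs):
        for x, t in zip(X, y):
            out = float(w @ x + b > 0)
            w += eta * (t - out) * x      # error-correction update
            b += eta * (t - out)
    return all(float(w @ x + b > 0) == t for x, t in zip(X, y))

print("and:", train_perceptron([0, 0, 0, 1]))   # LSBF: learned
print("or: ", train_perceptron([0, 1, 1, 1]))   # LSBF: learned
print("xor:", train_perceptron([0, 1, 1, 0]))   # PBF: single layer fails
```
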
  • Pattern Classification With Class Probability Output Network

    Publication Year: 2009 , Page(s): 1659 - 1673
    Cited by:  Papers (4)
    PDF (737 KB) | HTML

    The output of a classifier is usually determined by the value of a discriminant function, and a decision is made based on this output, which does not necessarily represent the posterior probability needed for the soft decision of classification. In this context, it is desirable to calibrate the output of a classifier so that it carries the meaning of the posterior probability of class membership. This paper presents a new postprocessing method for the probabilistic scaling of a classifier's output. For this purpose, the output of a classifier is analyzed and its distribution is described by beta distribution parameters. For a more accurate approximation of the class output distribution, the beta distribution parameters, as well as the kernel parameters describing the discriminant function, are adjusted so as to improve the uniformity of the beta cumulative distribution function (CDF) values for the given class output samples. As a result, the classifier with the proposed scaling method, referred to as the class probability output network (CPON), can provide accurate posterior probabilities for the soft decision of classification. To show the effectiveness of the proposed method, simulations of pattern classification using support vector machine (SVM) classifiers are performed on University of California at Irvine (UCI) data sets. The results demonstrate a statistically meaningful performance improvement of the SVM classifiers with the proposed CPON over SVM and SVM-related classifiers, as well as over other probabilistic scaling methods.

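A sketch of the beta-calibration idea behind CPON: fit a beta distribution to one class's scores, then read the CDF value as a calibrated probability-like output, with uniformity of the CDF values as the goodness-of-fit criterion. The scores here are synthetic, and the paper additionally tunes kernel parameters, which this sketch omits:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
scores = np.clip(rng.beta(5, 2, 500), 1e-6, 1 - 1e-6)  # class-output samples

a, b, loc, scale = stats.beta.fit(scores, floc=0, fscale=1)
calibrated = stats.beta.cdf(scores, a, b)               # CDF as calibrated output

# uniformity of the CDF values measures fit quality; the Kolmogorov-Smirnov
# statistic against U(0,1) quantifies it
print(stats.kstest(calibrated, "uniform"))
```
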
  • Generalized Encoding and Decoding Operators for Lattice-Based Associative Memories

    Publication Year: 2009 , Page(s): 1674 - 1678
    Cited by:  Papers (1)
    PDF (260 KB) | HTML

    During the 1990s, Ritter introduced a new family of associative memories based on lattice algebra instead of linear algebra. These memories provide unlimited storage capacity, unlike linear-correlation-based models. The canonical lattice-based memories, however, are susceptible to noise in the initial input data. In this brief, we present novel methods of encoding and decoding lattice-based memories using two families of ordered weighted average (OWA) operators. The result is greater robustness to distortion in the initial input data and a better understanding of how the choice of encoding and decoding operators affects the behavior of the system, with the tradeoff that the time complexity of encoding is increased.

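A toy sketch of an ordered weighted average (OWA) operator, the generalization of the max/min used in canonical lattice-memory encoding; the weight choices below are illustrative, while the paper studies two specific OWA families:

```python
import numpy as np

def owa(values, weights):
    """OWA: sort descending, then take the weighted sum."""
    return np.sort(values)[::-1] @ weights

x = np.array([3.0, -1.0, 2.0, 0.5])
print(owa(x, np.array([1, 0, 0, 0])))      # weights (1,0,...,0) recover max
print(owa(x, np.array([0, 0, 0, 1])))      # weights (0,...,0,1) recover min
print(owa(x, np.ones(4) / 4))              # uniform weights give the mean
```

Intermediate weight vectors interpolate between max and min, which is what buys the extra robustness to input noise at the cost of a sort per encoding step.
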
  • Identifying the Topology of a Coupled FitzHugh–Nagumo Neurobiological Network via a Pinning Mechanism

    Publication Year: 2009 , Page(s): 1679 - 1684
    Cited by:  Papers (14)
    PDF (551 KB) | HTML

    Topology identification of a network has received great interest because the study of many key properties of a network assumes a particular known topology. Unlike recent similar works, in which the evolution of all the nodes in a complex network must be observed, this brief presents a novel criterion for identifying the topology of a coupled FitzHugh-Nagumo (FHN) neurobiological network by receiving the membrane potentials of only a fraction of the neurons. Meanwhile, although only incomplete information is received, the evolution of all the neurons, including membrane potentials and recovery variables, is traced. Based on the Schur complement and Lyapunov stability theory, the exact weight configuration matrix can be estimated by a simple adaptive feedback control. The effectiveness of the proposed approach is verified on neural networks with fixed and switching topologies.

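A toy version of the adaptive-identification idea: an observer with the Lyapunov-based update C_hat' = -delta * outer(e, tanh(x)) estimates an unknown coupling matrix from observed trajectories. A driven scalar network stands in for the FHN dynamics, full-state observation stands in for the paper's partial (pinned) observation, all parameters are illustrative, and convergence relies on the inputs keeping the trajectories sufficiently exciting:

```python
import numpy as np

rng = np.random.default_rng(8)
n, dt, k, delta = 3, 1e-3, 5.0, 50.0
C = rng.uniform(-1, 1, (n, n))              # unknown true coupling matrix

x = rng.standard_normal(n)                  # observed network state
x_hat = np.zeros(n)                         # observer state
C_hat = np.zeros((n, n))                    # coupling estimate
t = 0.0
for _ in range(200_000):
    u = np.sin((np.arange(n) + 1.0) * t)    # distinct inputs for excitation
    s = np.tanh(x)
    e = x_hat - x                           # observation error
    dx = -x + C @ s + u
    dxh = -x_hat + C_hat @ s + u - k * e
    dC = -delta * np.outer(e, s)            # adaptive estimation law
    x, x_hat, C_hat = x + dt * dx, x_hat + dt * dxh, C_hat + dt * dC
    t += dt
print(np.max(np.abs(C_hat - C)))            # should shrink toward zero
```
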
  • 2010 IEEE World Congress on Computational Intelligence (WCCI)

    Publication Year: 2009 , Page(s): 1685
    PDF (754 KB)
    Freely Available from IEEE
  • White box nonlinear prediction models

    Publication Year: 2009 , Page(s): 1686
    PDF (151 KB)
    Freely Available from IEEE
  • Access over 1 million articles - The IEEE Digital Library [advertisement]

    Publication Year: 2009 , Page(s): 1687
    PDF (370 KB)
    Freely Available from IEEE
  • Why we joined ... [advertisement]

    Publication Year: 2009 , Page(s): 1688
    PDF (205 KB)
    Freely Available from IEEE
  • IEEE Computational Intelligence Society Information

    Publication Year: 2009 , Page(s): C3
    PDF (36 KB)
    Freely Available from IEEE
  • IEEE Transactions on Neural Networks Information for authors

    Publication Year: 2009 , Page(s): C4
    PDF (39 KB)
    Freely Available from IEEE

Aims & Scope

IEEE Transactions on Neural Networks is devoted to the science and technology of neural networks, publishing significant technical knowledge, exploratory developments, and applications of neural networks spanning biology, software, and hardware.


This Transactions ceased production in 2011. The current retitled publication is IEEE Transactions on Neural Networks and Learning Systems.

Full Aims & Scope