
IEEE Transactions on Neural Networks

Issue 4 • July 2004


22 items in this issue
  • Table of contents

    Publication Year: 2004, Page(s): c1
    PDF (36 KB)
    Freely Available from IEEE
  • IEEE Transactions on Neural Networks publication information

    Publication Year: 2004, Page(s): c2
    PDF (36 KB)
    Freely Available from IEEE
  • Guest Editorial Special Issue on Information Theoretic Learning

    Publication Year: 2004, Page(s):789 - 791
    Cited by:  Papers (1)
    PDF (126 KB) | HTML
    Freely Available from IEEE
  • A new criterion using information gain for action selection strategy in reinforcement learning

    Publication Year: 2004, Page(s):792 - 799
    Cited by:  Papers (7)
    Abstract | PDF (276 KB) | HTML

    In this paper, we regard the sequence of returns as outputs from a parametric compound source. Utilizing the fact that the coding rate of the source shows the amount of information about the return, we describe ℓ-learning algorithms based on the predictive coding idea for estimating an expected information gain concerning future information and give a convergence proof of the information gain…

  • Variational learning and bits-back coding: an information-theoretic view to Bayesian learning

    Publication Year: 2004, Page(s):800 - 810
    Cited by:  Papers (12)
    Abstract | PDF (308 KB) | HTML

    Bits-back coding, first introduced by Wallace in 1990 and later by Hinton and van Camp in 1993, provides an interesting link between Bayesian learning and information-theoretic minimum-description-length (MDL) learning approaches. Bits-back coding allows the cost function used in the variational Bayesian method called ensemble learning to be interpreted as a code length, in addition to the Bayesian…

  • Adaptive probabilistic neural networks for pattern classification in time-varying environment

    Publication Year: 2004, Page(s):811 - 827
    Cited by:  Papers (86)
    Abstract | PDF (540 KB) | HTML

    In this paper, we propose a new class of probabilistic neural networks (PNNs) working in a nonstationary environment. The novelty is summarized as follows: 1) We formulate the problem of pattern classification in a nonstationary environment as a prediction problem and design a probabilistic neural network to classify patterns having time-varying probability distributions. We note that the problem of…

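The classifier family behind the entry above can be illustrated with a minimal sketch. This is the classic static PNN decision rule (a Parzen-window Bayes classifier in the style of Specht), not the adaptive nonstationary variant the paper proposes; all data and parameter values here are illustrative.

```python
import numpy as np

def pnn_classify(x_train, y_train, x_new, sigma=0.3):
    """Static PNN decision rule: score each class by the Parzen (Gaussian
    kernel) estimate of its class-conditional density at x_new, then pick
    the highest-scoring class (equal class priors assumed)."""
    classes = np.unique(y_train)
    scores = []
    for c in classes:
        xc = x_train[y_train == c]                 # training patterns of class c
        d2 = ((x_new - xc) ** 2).sum(axis=1)       # squared distances to x_new
        scores.append(np.exp(-d2 / (2 * sigma ** 2)).mean())
    return classes[int(np.argmax(scores))]

# two well-separated Gaussian clusters as toy training data
rng = np.random.default_rng(0)
x = np.vstack([rng.normal(0.0, 0.3, (50, 2)), rng.normal(2.0, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
```

A point near a cluster center is assigned that cluster's label; the adaptive variant in the paper additionally lets the kernels track time-varying class densities.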
  • Gradient-based manipulation of nonparametric entropy estimates

    Publication Year: 2004, Page(s):828 - 837
    Cited by:  Papers (17)
    Abstract | PDF (302 KB) | HTML

    This paper derives a family of differential learning rules that optimize the Shannon entropy at the output of an adaptive system via kernel density estimation. In contrast to parametric formulations of entropy, this nonparametric approach assumes no particular functional form of the output density. We address problems associated with quantized data and finite sample size, and implement efficient m…

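The core quantity in the entry above, a nonparametric (Parzen-window) estimate of Shannon entropy that can be manipulated by gradient methods, can be sketched in one dimension. The analytic gradient is what such learning rules actually use; a numerical gradient is shown here for brevity, and the kernel width and step size are arbitrary choices.

```python
import numpy as np

def parzen_entropy(x, sigma=0.5):
    """Resubstitution estimate of H(X) = -E[log p(X)] using a Gaussian
    kernel density estimate evaluated at the samples themselves."""
    d = x[:, None] - x[None, :]                      # pairwise differences
    k = np.exp(-d**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    p = k.mean(axis=1)                               # \hat p(x_i)
    return -np.mean(np.log(p))

def entropy_gradient(x, sigma=0.5, eps=1e-5):
    """Central-difference gradient of the entropy estimate w.r.t. each
    sample; ascent on it spreads the samples out, descent clusters them."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        xp, xm = x.copy(), x.copy()
        xp[i] += eps
        xm[i] -= eps
        g[i] = (parzen_entropy(xp, sigma) - parzen_entropy(xm, sigma)) / (2 * eps)
    return g

rng = np.random.default_rng(0)
x = rng.normal(size=200)
x2 = x + 0.5 * entropy_gradient(x)   # one gradient-ascent step raises entropy
```

In an adaptive system the chain rule would carry this gradient back from the outputs to the system's weights.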
  • Probabilistic sequential independent components analysis

    Publication Year: 2004, Page(s):838 - 849
    Cited by:  Papers (1)
    Abstract | PDF (379 KB) | HTML

    Under-complete models, which derive lower dimensional representations of input data, are valuable in domains in which the number of input dimensions is very large, such as data consisting of a temporal sequence of images. This paper presents the under-complete product of experts (UPoE), where each expert models a one-dimensional projection of the data. Maximum-likelihood learning rules for this mo…

  • Entropy-based kernel mixture modeling for topographic map formation

    Publication Year: 2004, Page(s):850 - 858
    Cited by:  Papers (14)
    Abstract | PDF (279 KB) | HTML

    A new information-theoretic learning algorithm for kernel-based topographic map formation is introduced. In the one-dimensional case, the algorithm is aimed at uniformizing the cumulative distribution of the kernel mixture densities by maximizing its differential entropy. A nonparametric differential entropy estimator is used on which normalized gradient ascent is performed. Both differentiable and…

  • From blind signal extraction to blind instantaneous signal separation: criteria, algorithms, and stability

    Publication Year: 2004, Page(s):859 - 873
    Cited by:  Papers (57)
    Abstract | PDF (578 KB) | HTML

    This paper reports a study on the problem of the blind simultaneous extraction of specific groups of independent components from a linear mixture. This paper first presents a general overview and unification of several information theoretic criteria for the extraction of a single independent component. Then, our contribution fills the theoretical gap that exists between extraction and separation b…

  • Advanced search algorithms for information-theoretic learning with kernel-based estimators

    Publication Year: 2004, Page(s):874 - 884
    Cited by:  Papers (27)
    Abstract | PDF (519 KB) | HTML

    Recent publications have proposed various information-theoretic learning (ITL) criteria based on Renyi's quadratic entropy with nonparametric kernel-based density estimation as alternative performance metrics for both supervised and unsupervised adaptive system training. These metrics, based on entropy and mutual information, take into account higher order statistics, unlike the mean-square error (MSE)…

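The criterion this entry's search algorithms operate on, Renyi's quadratic entropy with a Gaussian Parzen window, reduces to a double sum of pairwise kernel evaluations, often called the information potential. A sketch under that standard formulation (the kernel width is an arbitrary choice):

```python
import numpy as np

def renyi_quadratic_entropy(x, sigma=0.5):
    """H2 = -log ∫ p(x)^2 dx with p estimated by a Gaussian Parzen window
    of width sigma. Convolving two width-sigma Gaussians gives one of
    variance 2*sigma^2, so the integral collapses to a pairwise double sum
    (the 'information potential')."""
    n = len(x)
    d = x[:, None] - x[None, :]              # pairwise differences
    s2 = 2 * sigma**2                        # variance of the convolved kernel
    ip = np.exp(-d**2 / (2 * s2)).sum() / (n**2 * np.sqrt(2 * np.pi * s2))
    return -np.log(ip)

x = np.random.default_rng(0).normal(size=300)
```

Because the double sum is smooth in the samples, it admits the gradient-based and advanced search methods the paper studies; it is also O(n^2), which motivates the search for efficient training algorithms.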
  • Advances on BYY harmony learning: information theoretic perspective, generalized projection geometry, and independent factor autodetermination

    Publication Year: 2004, Page(s):885 - 902
    Cited by:  Papers (26)
    Abstract | PDF (726 KB) | HTML

    The nature of Bayesian Ying-Yang harmony learning is reexamined from an information-theoretic perspective. Not only is its ability for model selection and regularization explained with new insights, but its relations to and differences from the studies of minimum description length (MDL), the Bayesian approach, bits-back-based MDL, the Akaike information criterion (AIC), maxim…

  • Learning mixture models with the regularized latent maximum entropy principle

    Publication Year: 2004, Page(s):903 - 916
    Cited by:  Papers (6)
    Abstract | PDF (401 KB) | HTML

    This paper presents a new approach to estimating mixture models based on a recent inference principle we have proposed: the latent maximum entropy principle (LME). LME is different from Jaynes' maximum entropy principle, standard maximum likelihood, and maximum a posteriori probability estimation. We demonstrate the LME principle by deriving new algorithms for mixture model estimation, and show ho…

  • A new information processing measure for adaptive complex systems

    Publication Year: 2004, Page(s):917 - 927
    Cited by:  Papers (1)
    Abstract | PDF (408 KB) | HTML

    This paper presents an implementation-independent measure of the amount of information processing performed by (part of) an adaptive system, which depends on the goal to be performed by the overall system. This new measure gives rise to a theoretical framework under which several classical supervised and unsupervised learning algorithms fall and, additionally, new efficient learning algorithms can…

  • A negentropy minimization approach to adaptive equalization for digital communication systems

    Publication Year: 2004, Page(s):928 - 936
    Cited by:  Papers (4)
    Abstract | PDF (412 KB) | HTML

    In this paper, we introduce and investigate a new adaptive equalization method based on minimizing approximate negentropy of the estimation error for a finite-length equalizer. We consider an approximate negentropy using nonpolynomial expansions of the estimation error as a new performance criterion to improve performance of a linear equalizer based on minimizing minimum mean squared error (MMSE)…

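Nonpolynomial negentropy approximations of the kind this entry builds on can be sketched with the familiar one-term, log-cosh form popularized by Hyvarinen. This is only the flavor of the criterion, not the paper's exact expansion; the Gaussian reference term is estimated here by Monte Carlo rather than taken from tables.

```python
import numpy as np

def approx_negentropy(y, n_mc=200_000, seed=1):
    """One-term nonpolynomial approximation J(y) ∝ (E[G(y)] - E[G(v)])^2
    with G(u) = log cosh(u) and v standard normal, after standardizing y.
    Near zero for Gaussian data, positive otherwise (up to sampling noise)."""
    y = np.asarray(y, dtype=float)
    y = (y - y.mean()) / y.std()                     # zero mean, unit variance
    g_y = np.log(np.cosh(y)).mean()
    v = np.random.default_rng(seed).normal(size=n_mc)  # Gaussian reference
    g_v = np.log(np.cosh(v)).mean()
    return (g_y - g_v) ** 2
```

Minimizing such a criterion over the equalizer taps drives the estimation error toward Gaussianity, which is the mechanism the abstract contrasts with plain MMSE.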
  • Feature selection in MLPs and SVMs based on maximum output information

    Publication Year: 2004, Page(s):937 - 948
    Cited by:  Papers (48)  |  Patents (1)
    Abstract | PDF (352 KB) | HTML

    This paper presents feature selection algorithms for multilayer perceptrons (MLPs) and multiclass support vector machines (SVMs), using mutual information between class labels and classifier outputs as an objective function. This objective function involves inexpensive computation of information measures only on discrete variables; provides immunity to prior class probabilities; and brackets the…

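The objective named in the entry above, mutual information between discrete class labels and (discretized) classifier outputs, is cheap to compute from the joint histogram. A generic sketch of that computation follows; the paper's exact estimator and feature-ranking loop are not reproduced here.

```python
import numpy as np

def mutual_information(labels, outputs):
    """I(C;Y) in nats for two discrete sequences, via the joint histogram:
    I = sum_ij p(c_i, y_j) * log( p(c_i, y_j) / (p(c_i) p(y_j)) )."""
    c_vals, c_idx = np.unique(labels, return_inverse=True)
    y_vals, y_idx = np.unique(outputs, return_inverse=True)
    joint = np.zeros((len(c_vals), len(y_vals)))
    for i, j in zip(c_idx, y_idx):                 # accumulate joint counts
        joint[i, j] += 1
    joint /= joint.sum()                           # normalize to probabilities
    pc = joint.sum(axis=1, keepdims=True)          # marginal over labels
    py = joint.sum(axis=0, keepdims=True)          # marginal over outputs
    nz = joint > 0                                 # 0 log 0 := 0
    return float((joint[nz] * np.log(joint[nz] / (pc @ py)[nz])).sum())
```

For feature selection, one would retrain (or mask inputs of) the classifier with a candidate feature removed and keep the subset that preserves the most output mutual information.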
  • IEEE Transactions on NanoBioscience

    Publication Year: 2004, Page(s): 949
    PDF (297 KB)
    Freely Available from IEEE
  • Explore IEL: IEEE's most comprehensive resource [advertisement]

    Publication Year: 2004, Page(s): 950
    PDF (341 KB)
    Freely Available from IEEE
  • IEEE Member Digital Library [advertisement]

    Publication Year: 2004, Page(s): 951
    PDF (179 KB)
    Freely Available from IEEE
  • 2004 IEEE Symposium on Computational Intelligence in Bioinformatics and Computational Biology

    Publication Year: 2004, Page(s): 952
    PDF (422 KB)
    Freely Available from IEEE
  • IEEE Neural Networks Society Information

    Publication Year: 2004, Page(s): c3
    PDF (31 KB)
    Freely Available from IEEE
  • Blank page [back cover]

    Publication Year: 2004, Page(s): c4
    PDF (2 KB)
    Freely Available from IEEE

Aims & Scope

IEEE Transactions on Neural Networks is devoted to the science and technology of neural networks, publishing significant technical knowledge, exploratory developments, and applications of neural networks ranging from biology to software to hardware.

 

This Transactions ceased production in 2011. The current retitled publication is IEEE Transactions on Neural Networks and Learning Systems.
