IEEE Transactions on Neural Networks

Issue 6 • Nov. 2003

Displaying Results 1 - 21 of 21
  • Simple model of spiking neurons

    Page(s): 1569 - 1572
    PDF (403 KB)

    A model is presented that reproduces spiking and bursting behavior of known types of cortical neurons. The model combines the biological plausibility of Hodgkin-Huxley-type dynamics with the computational efficiency of integrate-and-fire neurons. Using this model, one can simulate tens of thousands of spiking cortical neurons in real time (1 ms resolution) on a desktop PC.

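The abstract's real-time claim is easy to probe with a minimal Euler integration of the published two-variable form (v' = 0.04v² + 5v + 140 − u + I, u' = a(bv − u), with reset v ← c, u ← u + d when v ≥ 30 mV). The sketch below is an illustration, not the paper's code; the "regular spiking" parameter values are an assumption on our part.

```python
import numpy as np

# Minimal Euler sketch of the two-variable spiking model:
#   v' = 0.04*v^2 + 5*v + 140 - u + I,   u' = a*(b*v - u)
#   reset: if v >= 30 mV then v <- c, u <- u + d
# The "regular spiking" parameters below are an assumption (not stated in
# the abstract); this is an illustration, not the paper's implementation.

def simulate(I=10.0, T=1000, dt=1.0, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Integrate one neuron for T ms at dt ms resolution; return the
    membrane-potential trace and the spike times in ms."""
    v = -65.0
    u = b * v
    trace, spikes = [], []
    for step in range(int(T / dt)):
        if v >= 30.0:              # spike detected: record and reset
            spikes.append(step * dt)
            v = c
            u += d
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        trace.append(v)
    return np.array(trace), spikes

trace, spikes = simulate()         # constant input current I = 10
```

Each neuron costs only a handful of arithmetic operations per millisecond, which is what makes simulating tens of thousands of them in real time plausible on a desktop PC.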
  • Global and partial synchronism in phase-locked loop networks

    Page(s): 1572 - 1575
    PDF (263 KB)

    We analytically investigate the existence of global and partial synchronism in neural networks in which each node is represented by a phase oscillator. Partial synchronism, which is important for pattern recognition, can be induced by increasing the natural frequency of one oscillator and restricting the frequencies of the others to certain ranges.

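The distinction between global and partial synchronism can be illustrated with a generic phase-oscillator simulation (Kuramoto-type coupling); this is a stand-in, not the paper's PLL network model, and the coupling strength and frequency values below are arbitrary illustrative choices.

```python
import numpy as np

# Generic phase-oscillator network (Kuramoto-type coupling) used as a
# stand-in for the paper's PLL model. With equal natural frequencies the
# network phase-locks globally; detuning one oscillator far beyond the
# coupling strength leaves the remaining ones locked (partial synchronism).

def order_parameter(theta):
    """|<e^{i*theta}>|: 1 means full phase synchrony, ~0 means incoherence."""
    return abs(np.exp(1j * theta).mean())

def run(omega, K=2.0, dt=0.01, steps=5000, seed=0):
    """Euler-integrate theta_i' = omega_i + K * mean_j sin(theta_j - theta_i)
    and return the order parameter averaged over the final 1000 steps."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, len(omega))
    r_tail = []
    for step in range(steps):
        coupling = np.sin(theta[None, :] - theta[:, None]).mean(axis=1)
        theta += dt * (omega + K * coupling)
        if step >= steps - 1000:
            r_tail.append(order_parameter(theta))
    return float(np.mean(r_tail))

global_sync = run(np.full(20, 1.0))                # identical frequencies
partial_sync = run(np.r_[np.full(19, 1.0), 8.0])   # one detuned oscillator
```

With identical frequencies the averaged order parameter approaches 1; with one strongly detuned oscillator it drops, since that oscillator drifts while the rest stay phase-locked.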
  • Support vector machine with adaptive parameters in financial time series forecasting

    Page(s): 1506 - 1518
    PDF (558 KB) | HTML

    The support vector machine (SVM), a novel type of learning machine, has been receiving increasing interest in areas ranging from its original application in pattern recognition to other applications such as regression estimation, owing to its remarkable generalization performance. This paper deals with the application of SVM to financial time series forecasting. The feasibility of applying SVM to financial forecasting is first examined by comparing it with the multilayer back-propagation (BP) neural network and the regularized radial basis function (RBF) neural network. The variability in performance of SVM with respect to its free parameters is investigated experimentally. Adaptive parameters are then proposed by incorporating the nonstationarity of financial time series into SVM. Five real futures contracts collated from the Chicago Mercantile Market are used as the data sets. The simulations show that, among the three methods, SVM outperforms the BP neural network in financial forecasting and achieves generalization performance comparable to that of the regularized RBF neural network. Furthermore, the free parameters of SVM have a great effect on generalization performance. SVM with adaptive parameters both achieves higher generalization performance and uses fewer support vectors than the standard SVM in financial forecasting.

  • A high-performance feedback neural network for solving convex nonlinear programming problems

    Page(s): 1469 - 1477
    PDF (719 KB) | HTML

    Based on a new idea of successive approximation, this paper proposes a high-performance feedback neural network model for solving convex nonlinear programming problems. Unlike existing neural network optimization models, the proposed network involves no dual variables, penalty parameters, or Lagrange multipliers. It has the fewest state variables and a very simple structure. In particular, the proposed network has better asymptotic stability: for an arbitrarily given initial point, the trajectory of the network converges to an optimal solution of the convex nonlinear programming problem under no more than the standard assumptions. The network can also solve linear programming and convex quadratic programming problems, and the idea of a feedback network may be applied to other optimization problems. Feasibility and efficiency are substantiated by simulation examples.

  • No free lunch with the sandwich [sandwich estimator]

    Page(s): 1553 - 1559
    PDF (430 KB) | HTML

    In nonlinear regression theory, the sandwich estimator of the covariance matrix of the model parameters is known to be consistent even when the parameterized model does not contain the regression. In the latter case, however, we emphasize that the consistency of the sandwich holds only if the inputs of the training set are values of independent, identically distributed random variables. Thus, in the frequent practical modeling situation involving a training set whose inputs are deliberately chosen and imposed by the designer, we question the advisability of using the sandwich estimator rather than the simple estimator based on the inverse squared Jacobian.

  • Query-based learning for aerospace applications

    Page(s): 1437 - 1448
    PDF (712 KB) | HTML

    Models of real-world applications often include a large number of parameters with a wide dynamic range, which contributes to the difficulty of neural network training. Creating the training data set for such applications becomes costly, if not impossible. In order to overcome this challenge, one can employ an active learning technique known as query-based learning (QBL) to add performance-critical data to the training set during the learning phase, thereby efficiently improving overall learning and generalization. The performance-critical data can be obtained using an inverse mapping called network inversion (discrete or continuous network inversion) followed by an oracle query. This paper investigates the use of both inversion techniques for QBL and introduces an original heuristic for selecting the inversion target values in the continuous network inversion method. Efficiency and generalization were further enhanced by employing node-decoupled extended Kalman filter (NDEKF) training and a causality index (CI) as a means of reducing the input search dimensionality. The benefits of the overall QBL approach are experimentally demonstrated in two aerospace applications: a classification problem with a large input space and a control distribution problem.

  • A study on reduced support vector machines

    Page(s): 1449 - 1459
    PDF (645 KB) | HTML

    The reduced support vector machine (RSVM) was recently proposed as an alternative to the standard SVM. Motivated by the difficulty of handling large data sets with nonlinear-kernel SVMs, it preselects a subset of the data as support vectors and solves a smaller optimization problem. However, several issues of its practical use have not been fully discussed. For example, it is not known whether RSVM possesses generalization ability comparable to the standard SVM, or at what problem sizes RSVM outperforms SVM in training time. In this paper we show that the RSVM formulation is already in the form of a linear SVM and discuss four RSVM implementations. Experiments indicate that, in general, the test accuracy of RSVM is slightly lower than that of the standard SVM. In addition, for problems with up to tens of thousands of data points, if the percentage of support vectors is not high, existing SVM implementations are quite competitive in training time. Thus, from this empirical study, RSVM is mainly useful for either larger problems or those with many support vectors. The experiments also serve as comparisons of 1) different implementations of linear SVM and 2) standard SVM with linear and quadratic cost functions.

  • Multiobjective evolutionary optimization of the size, shape, and position parameters of radial basis function networks for function approximation

    Page(s): 1478 - 1495
    PDF (1309 KB)

    This paper presents a multiobjective evolutionary algorithm that optimizes radial basis function neural networks (RBFNNs) to approximate target functions from a set of input-output pairs. The procedure allows the application of heuristics to improve the solution of the problem at hand by including new genetic operators in the evolutionary process. These operators are based on two well-known matrix transformations, singular value decomposition (SVD) and orthogonal least squares (OLS), which are used to define new mutation operators that produce local or global modifications in the radial basis functions (RBFs) of the networks (the individuals of the evolutionary population). After analyzing the efficiency of the different operators, we show that the global mutation operators yield an improved procedure for adjusting the parameters of the RBFNNs.

  • Stability analysis of bidirectional associative memory networks with time delays

    Page(s): 1560 - 1565
    PDF (391 KB) | HTML

    Using the method of Lyapunov functionals, a model of bidirectional associative memory networks with time delays is studied. The asymptotic stability is global in the state space of the neuronal activations and is independent of the delays. Our results apply to a variety of situations arising in both biological and artificial neural networks.

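For context, the delayed bidirectional associative memory (BAM) model typically analyzed in this literature takes the following form (the paper's exact notation is an assumption here):

```latex
\begin{aligned}
\dot{x}_i(t) &= -a_i x_i(t) + \sum_{j=1}^{m} w_{ji}\, f_j\!\left(y_j(t-\tau_{ji})\right) + I_i,
  & i &= 1,\dots,n,\\
\dot{y}_j(t) &= -b_j y_j(t) + \sum_{i=1}^{n} v_{ij}\, g_i\!\left(x_i(t-\sigma_{ij})\right) + J_j,
  & j &= 1,\dots,m.
\end{aligned}
```

A Lyapunov functional argument then yields conditions on the decay rates $a_i, b_j$, the connection weights, and the Lipschitz constants of the activations $f_j, g_i$ under which the equilibrium is globally asymptotically stable for every delay value, matching the delay-independence claim in the abstract.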
  • Habituation in the KIII olfactory model with chemical sensor arrays

    Page(s): 1565 - 1568
    PDF (436 KB) | HTML

    This paper presents a novel combination of chemical sensors and the KIII model for simulating mixture perception with a habituation process triggered by local activity. Stimuli are generated by partitioning feature space with labeled lines. Pattern completion is demonstrated through coherent oscillations across granule populations using experimental odor mixtures.

  • Learning from labeled and unlabeled data using a minimal number of queries

    Page(s): 1496 - 1505
    PDF (382 KB) | HTML

    The considerable time and expense required for labeling data has prompted the development of algorithms which maximize the classification accuracy for a given amount of labeling effort. On the one hand, the effort has been to develop the so-called "active learning" algorithms which sequentially choose the patterns to be explicitly labeled so as to realize the maximum information gain from each labeling. On the other hand, the effort has been to develop algorithms that can learn from labeled as well as the more abundant unlabeled data. Proposed in this paper is an algorithm that integrates the benefits of active learning with the benefits of learning from labeled and unlabeled data. Our approach is based on reversing the roles of the labeled and unlabeled data. Specifically, we use a genetic algorithm (GA) to iteratively refine the class membership of the unlabeled patterns so that the maximum a posteriori (MAP) based predicted labels of the patterns in the labeled dataset are in agreement with the known labels. This reversal of the roles of labeled and unlabeled patterns leads to an implicit class assignment of the unlabeled patterns. For active learning, we use a subset of the GA population to construct multiple MAP classifiers. Points in the input space where there is maximal disagreement amongst these classifiers are then selected for explicit labeling. The learning from labeled and unlabeled data and active learning phases are interlaced and together provide accurate classification while minimizing the labeling effort.

  • Predictive self-organizing map for vector quantization of migratory signals and its application to mobile communications

    Page(s): 1532 - 1540
    PDF (659 KB) | HTML

    This paper proposes a predictive self-organizing map (P-SOM) that performs adaptive vector quantization of migratory time-sequential signals whose stochastic properties, such as the average signal value in each cluster, vary continuously. The P-SOM possesses not only the weights corresponding to the signal values themselves but also weights related to time-derivative information. All the weights self-organize to predict appropriate future reference vectors. The prediction using the time-derivative weights enables the separation of continuously varying components from random noise components, resulting in better adaptive vector quantization. That is to say, the stationary random noise components are captured by the ordinary weights, whereas the migrating components are captured by the first- (and higher-) order time-derivative ones. An application to a mobile communication receiver using quasi-coherent detection is presented. By utilizing both the ordinary and time-derivative weights consistently, the P-SOM generates predictive reference vectors and quantizes the migratory signals adaptively. Simulation experiments on bit-error rates (BERs) demonstrate that a P-SOM adaptive demodulator has a superior capability to track phase rotations caused by the Doppler effect. A theoretical noise analysis is also reported for the conventional SOM and the P-SOM; the calculated results are in good agreement with the experimental ones.

  • Parallel nonlinear optimization techniques for training neural networks

    Page(s): 1460 - 1468
    PDF (434 KB) | HTML

    In this paper, we propose the use of parallel quasi-Newton (QN) optimization techniques to improve the rate of convergence of the training process for neural networks. The parallel algorithms are developed by using the self-scaling quasi-Newton (SSQN) methods. At the beginning of each iteration, a set of parallel search directions is generated. Each of these directions is selectively chosen from a representative class of QN methods. Inexact line searches are then carried out to estimate the minimum point along each search direction. The proposed parallel algorithms are tested over a set of nine benchmark problems. Computational results show that the proposed algorithms outperform other existing methods, which are evaluated over the same set of test problems.

  • Real-time collision-free motion planning of a mobile robot using a Neural Dynamics-based approach

    Page(s): 1541 - 1552
    PDF (1266 KB) | HTML

    A neural-dynamics-based approach is proposed for real-time motion planning with obstacle avoidance for a mobile robot in a nonstationary environment. The dynamics of each neuron in the topologically organized neural network is characterized by a shunting equation or an additive equation. The real-time collision-free robot motion is planned through the dynamic neural activity landscape of the network, without any learning procedures and without any local collision-checking procedures at each step of the robot's movement; the algorithm is therefore computationally simple. There are only local connections among neurons, and the computational complexity depends linearly on the neural network size. The stability of the proposed neural network system is proved by qualitative analysis and Lyapunov stability theory. The effectiveness and efficiency of the proposed approach are demonstrated through simulation studies.

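The activity-landscape idea can be sketched on a toy grid: the target injects a large positive external input, obstacles a large negative one, and each cell obeys a shunting equation driven only by its 8-neighborhood. Everything below (parameter values, grid size, and the torus wrap-around of np.roll at the edges) is a simplification for illustration, not the paper's formulation.

```python
import numpy as np

# Toy shunting-equation activity landscape on a 10x10 grid. The target cell
# receives a large positive external input, obstacle cells a large negative
# one, and activity spreads only through local (8-neighbor) connections.
# All parameter values are illustrative assumptions; np.roll wraps at the
# edges (a torus), which the paper's bounded workspace would not.

A, B, D, MU, E = 10.0, 1.0, 1.0, 1.0, 100.0
I = np.zeros((10, 10))
I[9, 9] = E            # target: large positive input
I[4, 2:8] = -E         # a wall of obstacles: large negative input

def step(x, dt=0.01):
    pos = np.maximum(x, 0.0)   # only positive activity propagates
    lateral = sum(np.roll(np.roll(pos, di, axis=0), dj, axis=1)
                  for di in (-1, 0, 1) for dj in (-1, 0, 1)
                  if (di, dj) != (0, 0))
    excite = np.maximum(I, 0.0) + MU * lateral   # keeps activity below B
    inhibit = np.maximum(-I, 0.0)                # keeps activity above -D
    return x + dt * (-A * x + (B - x) * excite - (D + x) * inhibit)

grid = np.zeros((10, 10))
for _ in range(2000):          # relax the landscape toward steady state
    grid = step(grid)
```

A robot then simply climbs this landscape: the target is the global peak, obstacle cells settle at negative activity, and because positive activity cannot propagate through them, gradient ascent steers around the wall with no explicit collision checking.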
  • Robust combination of neural networks and hidden Markov models for speech recognition

    Page(s): 1519 - 1531
    PDF (649 KB) | HTML

    Acoustic modeling in state-of-the-art speech recognition systems usually relies on hidden Markov models (HMMs) with Gaussian emission densities. HMMs suffer from intrinsic limitations, mainly due to their arbitrary parametric assumptions. Artificial neural networks (ANNs) appear to be a promising alternative in this respect, but they have historically failed as a general solution to the acoustic modeling problem. This paper introduces algorithms, based on a gradient-ascent technique, for global training of a hybrid ANN/HMM system in which the ANN is trained to estimate the emission probabilities of the states of the HMM. The approach is related to the major hybrid systems proposed by Bourlard and Morgan and by Bengio, with the aim of combining their benefits within a unified framework and overcoming their limitations. Several viable solutions to the "divergence problem," which may arise when training is carried out under the maximum-likelihood (ML) criterion, are proposed. Experimental results in speaker-independent, continuous speech recognition over Italian digit strings validate the novel hybrid framework, yielding improved recognition performance over HMMs with mixtures of Gaussian components, as well as over Bourlard and Morgan's paradigm. In particular, the maximum a posteriori (MAP) version of the algorithm yields a 46.34% relative word error rate reduction with respect to standard HMMs.

  • A low-complexity fuzzy activation function for artificial neural networks

    Page(s): 1576 - 1579
    PDF (374 KB) | HTML

    A novel fuzzy-based activation function for artificial neural networks is proposed. This approach provides easy hardware implementation and straightforward interpretability on the basis of IF-THEN rules. Backpropagation learning with the new activation function also has low computational complexity. Several application examples (XOR gate, chaotic time-series prediction, channel equalization, and independent component analysis) support the potential of the proposed scheme.

  • Adaptive blind signal and image processing: learning algorithms and applications [Book Review]

    Page(s): 1580
    PDF (145 KB)
    Freely Available from IEEE
  • Qualitative analysis and synthesis of recurrent neural networks [Book Review]

    Page(s): 1580 - 1581
    PDF (154 KB)
    Freely Available from IEEE
  • Author index

    Page(s): 1583 - 1588
    PDF (196 KB)
    Freely Available from IEEE
  • Subject index

    Page(s): 1588 - 1600
    PDF (244 KB)
    Freely Available from IEEE

Aims & Scope

IEEE Transactions on Neural Networks is devoted to the science and technology of neural networks, publishing significant technical knowledge, exploratory developments, and applications of neural networks spanning biology, software, and hardware.


This Transactions ceased production in 2011. The current retitled publication is IEEE Transactions on Neural Networks and Learning Systems.

Full Aims & Scope