Neural Networks, IEEE Transactions on

Issue 6 • Date Nov. 2000

Displaying Results 1 - 25 of 34
  • Book reviews

    Page(s): 1508 - 1511
  • Author index

    Page(s): 1512 - 1516
  • Subject index

    Page(s): 1516 - 1529
  • A hybrid linear-neural model for time series forecasting

    Page(s): 1402 - 1412

    This paper considers a linear model with time-varying parameters, controlled by a neural network, to analyze and forecast nonlinear time series. We show that this formulation, called the neural coefficient smooth transition autoregressive model, is closely related to the threshold autoregressive model and the smooth transition autoregressive model, with the advantage of naturally incorporating linear multivariate thresholds and smooth transitions between regimes. In our proposal, the neural-network output is used to induce a partition of the input space with smooth, multivariate thresholds. This also allows the choice of good initial values for the training algorithm.

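As an illustrative sketch (not the authors' code; the function names, the two-regime setup, and all numbers below are hypothetical), a neural-coefficient model of this kind can be written as an AR model whose coefficients are blended by a logistic unit applied to the transition variables:

```python
import numpy as np

def ncstar_predict(y_lags, x, w, b, regime_a, regime_b):
    """One-step forecast with neural-network-controlled AR coefficients.

    A single logistic unit turns the transition variables x into a
    smooth, multivariate threshold s in (0, 1); the effective AR
    coefficients interpolate between two regimes.
    """
    s = 1.0 / (1.0 + np.exp(-(w @ x + b)))      # smooth transition weight
    coefs = (1.0 - s) * regime_a + s * regime_b
    return float(coefs @ y_lags)

y_lags = np.array([0.5, -0.2])                  # y_{t-1}, y_{t-2}
pred = ncstar_predict(
    y_lags, x=y_lags,                           # lags double as transition vars
    w=np.array([10.0, 0.0]), b=0.0,             # steep threshold on y_{t-1}
    regime_a=np.array([0.9, 0.0]),              # persistent regime
    regime_b=np.array([0.2, 0.3]))              # mean-reverting regime
```

With a large positive y_{t-1}, the transition weight is close to 1 and the mean-reverting regime dominates the forecast.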
  • Synthesis of feedforward networks in supremum error bound

    Page(s): 1213 - 1227

    The main result of this paper is a constructive proof of a formula for the upper bound of the approximation error, in the L∞ (supremum) norm, of multidimensional functions by feedforward networks with one hidden layer of sigmoidal units and a linear output. This result is applied to formulate a new method of neural-network synthesis. It can also be used to estimate the complexity of the maximum-error network and/or to initialize that network's weights. An example of the network synthesis is given.

  • User adaptive handwriting recognition by self-growing probabilistic decision-based neural networks

    Page(s): 1373 - 1384

    Based on self-growing probabilistic decision-based neural networks (SPDNNs), user adaptation of the SPDNN parameters is formulated as incremental reinforced and anti-reinforced learning procedures, which are easily integrated into the batched training procedures of the SPDNN. In this study, we developed 1) an SPDNN-based handwriting recognition system, 2) a two-stage recognition structure, and 3) a three-phase training methodology for a global coarse classifier (stage 1), a user-independent handwritten character recognizer (stage 2), and a user adaptation module on a personal computer. With training and testing on a set of 600 commonly used Chinese characters, the recognition results indicate that the user adaptation module significantly improved recognition accuracy. The average recognition rate increased from 44.2% to 82.4% in five adapting cycles and finally reached 90.2% in ten adapting cycles.

  • An iterative inversion approach to blind source separation

    Page(s): 1423 - 1437

    We present an iterative inversion (II) approach to blind source separation (BSS). It consists of a quasi-Newton method for solving an estimating equation obtained from the implicit inversion of a robust estimate of the mixing system. The resulting learning rule includes several existing BSS algorithms as particular cases, giving them a novel, unified interpretation. It also provides a justification of the Cardoso and Laheld (1996) step-size normalization. The II method is first presented for instantaneous mixtures and then extended to blind separation of convolutive mixtures. Finally, we derive necessary and sufficient asymptotic stability conditions for both the instantaneous and convolutive methods to converge.

  • Anisotropic noise injection for input variables relevance determination

    Page(s): 1201 - 1212

    There are two archetypal ways to control the complexity of a flexible regressor: subset selection and ridge regression. In neural-network jargon, they are known, respectively, as pruning and weight decay. These techniques may also be adapted to estimate which features of the input space are relevant for predicting the output variables. Relevance is given by a binary indicator for subset selection and by a continuous rating for ridge regression. This paper shows how to achieve such a rating for a multilayer perceptron trained with noise (or jitter). Noise injection (NI) is modified in order to penalize irrelevant features heavily. The proposed algorithm is attractive in that it requires the tuning of a single parameter, which controls the complexity of the model (effective number of parameters) together with the rating of feature relevances (effective input-space dimension). Bounds on the effective number of parameters support that the stability of this adaptive scheme is enforced by the constraints applied to the admissible set of relevance indices. The good properties of the algorithm are confirmed by satisfactory experimental results on simulated data sets.

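A toy illustration of the underlying mechanism, using the standard result that input jitter acts like a ridge penalty (a linear least-squares model stands in for the paper's multilayer perceptron; all names and data are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_with_jitter(X, y, noise_sd, reps=200):
    """Least-squares fit on inputs replicated with per-feature jitter.

    Zero-mean noise of std sigma_k on feature k behaves like a ridge
    penalty proportional to sigma_k^2 on w_k, so a feature given heavy
    jitter is driven toward zero weight -- i.e., rated irrelevant.
    """
    Xj = np.vstack([X + rng.normal(0.0, noise_sd, size=X.shape)
                    for _ in range(reps)])
    yj = np.tile(y, reps)
    w, *_ = np.linalg.lstsq(Xj, yj, rcond=None)
    return w

X = rng.normal(size=(100, 2))
y = 2.0 * X[:, 0]                        # only feature 0 matters
w = fit_with_jitter(X, y, noise_sd=np.array([0.0, 3.0]))
```

Heavy jitter on feature 1 shrinks its weight toward zero while the relevant, unjittered feature keeps its coefficient, which is the relevance-rating effect the paper exploits.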
  • A comment on "On equilibria, stability, and instability of Hopfield neural networks" [and reply]

    Page(s): 1506 - 1507

    It is pointed out that the main results on the existence, uniqueness, and global asymptotic stability of the equilibrium of a continuous-time Hopfield-type neural network given in the paper by Zhi-Hong Guan et al. (2000) are special cases of results previously obtained in the literature. In reply, the original authors consider the reasoning of Xue-Bin Liang's comments and state that their analysis method is in fact different from existing ones.

  • Local PCA algorithms

    Page(s): 1242 - 1250

    In recent years, various principal component analysis (PCA) algorithms have been proposed. In this paper we use a general framework to describe those PCA algorithms that are based on Hebbian learning. For an important subset of these algorithms, the local algorithms, we fully describe their equilibria (where all lateral connections are set to zero) and their local stability. We show how the parameters in the PCA algorithms have to be chosen in order to obtain an algorithm that converges to a stable equilibrium providing principal component extraction.

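As a concrete instance of the local Hebbian class analyzed here, Oja's single-unit rule (a standard example; the data, learning rate, and epoch count below are illustrative) converges to the leading principal component:

```python
import numpy as np

rng = np.random.default_rng(1)

def oja_first_pc(X, lr=0.01, epochs=50):
    """Oja's local Hebbian rule: w += lr * y * (x - y * w), with y = w.x.

    The decay term -lr * y^2 * w keeps ||w|| near 1; the rule's stable
    equilibrium is the leading eigenvector of the data covariance.
    """
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in X:
            y = w @ x
            w += lr * y * (x - y * w)
        w /= np.linalg.norm(w)          # guard against numerical drift
    return w

# elongated Gaussian cloud: the true first PC is the x-axis
X = rng.normal(size=(500, 2)) * np.array([3.0, 0.5])
w = oja_first_pc(X)
```

The recovered direction aligns (up to sign) with the axis of largest variance.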
  • Approximating the maximum weight clique using replicator dynamics

    Page(s): 1228 - 1241

    Given an undirected graph with weights on the vertices, the maximum weight clique problem (MWCP) is to find a subset of mutually adjacent vertices (a clique) having the largest total weight. This generalizes the problem of finding the maximum-cardinality clique of an unweighted graph, the special case of the MWCP in which all vertex weights are equal. The problem is NP-hard for arbitrary graphs, and so is the problem of approximating it within a constant factor. We present a parallel, distributed heuristic for approximating the MWCP based on dynamics principles. It centers around a continuous characterization of the MWCP, a purely combinatorial problem, which allows it to be formulated in terms of continuous quadratic programming. One drawback is the presence of spurious solutions, and we characterize them. To avoid them we introduce a regularized continuous formulation of the MWCP and show how it completely solves the problem. The formulation naturally maps onto a parallel, distributed computational network whose dynamical behavior is governed by the replicator equations, dynamical systems introduced in evolutionary game theory and population genetics to model evolutionary processes on a macroscopic scale. We present theoretical results guaranteeing that the solutions provided by our clique-finding replicator network are actually those sought. Experimental results confirm the effectiveness of the proposed approach.

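A minimal sketch of discrete-time replicator dynamics on an unweighted example, via the classic Motzkin-Straus connection between cliques and simplex maximizers of x'Ax (the graph and threshold are illustrative; the paper's regularized, vertex-weighted formulation is not reproduced here):

```python
import numpy as np

def replicator_clique(A, iters=500):
    """Discrete-time replicator dynamics on the probability simplex.

    Each step rescales x_i by its 'payoff' (A x)_i; fixed points with
    support on a maximal clique maximize x'Ax (Motzkin-Straus), so the
    surviving support identifies a clique.
    """
    n = A.shape[0]
    x = np.full(n, 1.0 / n)             # start at the barycenter
    for _ in range(iters):
        Ax = A @ x
        x = x * Ax / (x @ Ax)
    return x

# a 5-vertex graph whose unique maximum clique is {0, 1, 2}
A = np.zeros((5, 5))
for i, j in [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]:
    A[i, j] = A[j, i] = 1.0

x = replicator_clique(A)
support = sorted(np.flatnonzero(x > 1e-3).tolist())
```

Mass concentrates uniformly on the triangle's vertices, and the other components decay geometrically to zero.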
  • Voronoi networks and their probability of misclassification

    Page(s): 1361 - 1372

    To reduce memory requirements and computation cost, many algorithms have been developed that perform nearest-neighbor classification using only a small number of representative samples obtained from the training set. We call the classification model underlying all these algorithms Voronoi networks (Vnets). We analyze the generalization capabilities of these networks by bounding the generalization error. The class of problems that can be solved by Vnets is characterized by the extent to which the set of points on the decision boundaries fills the feature space. We show that Vnets asymptotically converge to the Bayes classifier with arbitrarily high probability, provided the number of representative samples grows more slowly than the square root of the number of training samples, and we also give the optimal growth rate of the number of representative samples. We repeat the analysis for decision tree (DT) classifiers and compare them with Vnets. The bias/variance dilemma and the curse of dimensionality with respect to Vnets and DTs are also discussed.

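The model itself is just nearest-prototype classification; a minimal sketch (function name, prototypes, and data are illustrative):

```python
import numpy as np

def vnet_classify(prototypes, labels, X):
    """Nearest-prototype ("Voronoi network") classification.

    Each query point receives the label of its closest representative
    sample, i.e., the label of the Voronoi cell it falls in.
    """
    d = np.linalg.norm(X[:, None, :] - prototypes[None, :, :], axis=2)
    return labels[np.argmin(d, axis=1)]

prototypes = np.array([[0.0, 0.0], [4.0, 4.0]])   # one prototype per class
labels = np.array([0, 1])
X = np.array([[0.5, 0.2], [3.8, 4.1], [1.0, 1.0]])
pred = vnet_classify(prototypes, labels, X)
```

The decision boundary is the perpendicular bisector between the two prototypes, the simplest case of the Voronoi partitions the paper analyzes.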
  • Asynchronous self-organizing maps

    Page(s): 1315 - 1322

    A recently defined energy function that leads to a self-organizing map is used as the foundation for an asynchronous neural-network algorithm. We generalize the existing stochastic gradient approach to an asynchronous parallel stochastic gradient method for generating a topological map on a distributed (MIMD) computer system. A convergence proof is presented, and simulation results on a set of problems are included. A practical difficulty with the energy-function approach is that a summation over the entire network is required when computing updates. Using simulations, we demonstrate effective algorithms that use efficient sampling to approximate these sums.

  • Convergent on-line algorithms for supervised learning in neural networks

    Page(s): 1284 - 1299

    We define online algorithms for neural-network training based on the construction of multiple copies of the network, which are trained on different data blocks. It is shown that suitable training algorithms can be defined such that the disagreement between the different copies of the network is asymptotically reduced, and convergence toward stationary points of the global error function can be guaranteed. Relevant features of the proposed approach are that the learning rate need not be forced to zero and that real-time learning is permitted.

  • Morphology and autowave metric on CNN applied to bubble-debris classification

    Page(s): 1385 - 1393

    We present initial results of applying a cellular neural network (CNN)-based autowave metric to high-speed pattern recognition of gray-scale images. The approach is applied to a problem involving separation of metallic wear-debris particles from air bubbles, which arises in an optical system for determining mechanical wear. This paper focuses on distinguishing debris particles suspended in the oil flow from air bubbles and on using CNN technology to create an online fault-monitoring system. The goal is a classification system with an extremely low false-alarm rate for misclassified bubbles. The CNN algorithm detects and classifies single bubbles and bubble groups using binary morphology and the autowave metric. The debris particles are separated based on autowave distances computed between bubble models and the unknown objects. Initial experiments indicate that the proposed algorithm is robust and noise-tolerant, and that, when implemented on a CNN universal chip, it provides a real-time solution.

  • Lp approximation of Sigma-Pi neural networks

    Page(s): 1485 - 1489

    A feedforward Sigma-Pi neural network with a single hidden layer of m neurons is given by Σ_{j=1}^{m} c_j g(Π_{k=1}^{n} λ_k x_k^{θ_{kj}}), where c_j, θ_{kj}, λ_k ∈ R. We investigate the approximation of arbitrary functions f: R^n → R by a Sigma-Pi neural network in the Lp norm. An Lp locally integrable function g(t) can approximate any given function if and only if g(t) cannot be written in the form Σ_{j=1}^{n} Σ_{k=0}^{m} α_{jk} (ln|t|)^{j-1} t^k.

  • A robust neural controller for underwater robot manipulators

    Page(s): 1465 - 1470

    Presents a robust control scheme using a multilayer neural network with the error backpropagation learning algorithm. The multilayer neural network acts as a compensator for a conventional sliding-mode controller, improving control performance when initial assumptions about the uncertainty bounds of system parameters are not valid. The proposed controller is applied to control a robot manipulator operating under the sea, which is subject to large uncertainties such as buoyancy, drag force, wave effects, currents, and added mass/moment of inertia. Computer simulation results show that the proposed control scheme copes effectively with these unexpected large uncertainties.

  • Heteroassociations of spatio-temporal sequences with the bidirectional associative memory

    Page(s): 1503 - 1505

    Autoassociations of spatio-temporal sequences have been discussed by a number of authors. We propose a mechanism for storing and retrieving pairs of spatio-temporal sequences with the network architecture of the standard bidirectional associative memory (BAM), thereby achieving heteroassociations of spatio-temporal sequences.

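For reference, a sketch of recall in the standard BAM that the proposed mechanism builds on (a single stored bipolar pair with Hebbian outer-product weights; the spatio-temporal sequence extension is not shown, and all names are illustrative):

```python
import numpy as np

def bam_recall(W, x, steps=5):
    """Standard BAM recall: alternate y = sgn(W x), x = sgn(W' y).

    Iteration settles into a stored (x, y) pair, giving bidirectional
    heteroassociation between the two layers.
    """
    sgn = lambda v: np.where(v >= 0, 1, -1)
    for _ in range(steps):
        y = sgn(W @ x)
        x = sgn(W.T @ y)
    return x, y

# store one bipolar pair (x1, y1) via the Hebbian outer product
x1 = np.array([1, -1, 1, -1])
y1 = np.array([1, 1, -1])
W = np.outer(y1, x1)

# probe with a corrupted version of x1 (last bit flipped)
x_out, y_out = bam_recall(W, np.array([1, -1, 1, 1]))
```

The noisy probe is cleaned to the stored x1 while the associated y1 is retrieved in the other layer.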
  • Neural discriminant analysis

    Page(s): 1394 - 1401

    The role of the bootstrap is highlighted for nonlinear discriminant analysis using a feedforward neural-network model. Statistical techniques are formulated in terms of the likelihood principle for a neural-network model when the data consist of ungrouped binary responses and a set of predictor variables. The information criterion based on the bootstrap method is shown to be favorable when selecting the optimum number of hidden units for a neural-network model. To summarize the measure of goodness of fit, the deviance from fitting a neural-network model to binary response data can be bootstrapped. We also provide bootstrap estimates of the biases of excess error in a prediction rule constructed by fitting the training sample in the neural-network model. We further propose bootstrap methods for the analysis of residuals in order to identify outliers and examine distributional assumptions in neural-network model fitting. These methods are illustrated through analyses of medical diagnostic data.

  • Blind extraction of singularly mixed source signals

    Page(s): 1413 - 1422

    This paper introduces a novel technique for sequential blind extraction of singularly mixed sources. First, a neural-network model and an adaptive algorithm for single-source blind extraction are introduced. Next, an extractability analysis is presented for a singular mixing matrix, and two sets of necessary and sufficient extractability conditions are derived. The adaptive algorithm and neural-network model for sequential blind extraction are then presented, and the stability of the algorithm is discussed. Simulation results illustrate the validity of the adaptive algorithm and the stability analysis. The proposed algorithm is suitable for both nonsingular and singular mixing matrices.

  • Generalization of adaptive neuro-fuzzy inference systems

    Page(s): 1332 - 1346

    The adaptive network-based fuzzy inference system (ANFIS) of Jang (1993) is extended to the generalized ANFIS (GANFIS) by proposing a generalized fuzzy model (GFM) and considering a generalized radial basis function (GRBF) network. The GFM encompasses both the Takagi-Sugeno (TS) model and the compositional rule of inference (CRI) model. The conditions under which the proposed GFM reduces to the TS model or the CRI model are presented. The basis function in the GRBF is a generalized Gaussian function with three parameters. The architecture of the GRBF network is devised to learn the parameters of the GFM, and the GRBF network and GFM are proved to be functionally equivalent. It is shown that the GRBF network can be reduced to either the standard RBF network or Hunt's RBF network. The issue of normalized versus non-normalized GRBF networks is investigated in the context of GANFIS, and an interesting symmetry property of the error surface of the GRBF network is examined. The proposed GANFIS is applied to the modeling of a multivariable system such as the stock market.

  • Global stability for cellular neural networks with time delay

    Page(s): 1481 - 1484

    A sufficient condition for the existence of a unique equilibrium point and its global asymptotic stability in cellular neural networks with delay (DCNNs) is derived. It is shown that the condition relies on the feedback matrices and is independent of the delay parameter. Furthermore, this condition is less restrictive than those given in the literature.

  • On-line learning of dynamical systems in the presence of model mismatch and disturbances

    Page(s): 1272 - 1283

    This paper is concerned with the online learning of unknown dynamical systems using a recurrent neural network. The unknown dynamical systems to be learned are subject to disturbances and are possibly unstable. The neural-network model used has a simple architecture with one layer of adaptive connection weights. Four learning rules are proposed for the cases where the system state is measurable in continuous or discrete time. Some of these learning rules extend the σ-modification of the standard gradient learning rule. Convergence properties are given to show that the weight parameters of the recurrent neural network are bounded and that the state estimation error converges exponentially to a bounded set, which depends on the modeling error and the disturbance bound. The effectiveness of the proposed learning rules is demonstrated with an illustrative example of tracking a Brownian motion.

  • Variational Gaussian process classifiers

    Page(s): 1458 - 1464

    Gaussian processes are a promising nonlinear regression tool, but it is not straightforward to solve classification problems with them. In this paper, the variational methods of Jaakkola and Jordan (2000) are applied to Gaussian processes to produce an efficient Bayesian binary classifier.

  • Stable neural controller design for unknown nonlinear systems using backstepping

    Page(s): 1347 - 1360

    We propose, from an adaptive control perspective, a neural controller for a class of unknown, minimum-phase, feedback-linearizable nonlinear systems with known relative degree. The control scheme is based on the backstepping design technique in conjunction with a linearly parametrized neural-network structure. The resulting controller, however, moves the complex mechanics involved in a typical backstepping design from offline to online. With an appropriate choice of network size and neural basis functions, the same controller can be trained online to control different nonlinear plants with the same relative degree, with semiglobal stability as shown by a simple Lyapunov analysis. Meanwhile, the controller also preserves some of the performance properties of standard backstepping controllers. Simulation results demonstrate these properties and compare the neural controller with a standard backstepping controller.

Aims & Scope

IEEE Transactions on Neural Networks is devoted to the science and technology of neural networks, publishing significant technical knowledge, exploratory developments, and applications of neural networks ranging from biology to software to hardware.

This Transactions ceased production in 2011. The current retitled publication is IEEE Transactions on Neural Networks and Learning Systems.
