Neural Networks, IEEE Transactions on

Issue 4 • July 2000

  • Local routing algorithms based on Potts neural networks

    Publication Year: 2000 , Page(s): 970 - 977
    Cited by:  Papers (2)

    A feedback neural approach to static communication routing in asymmetric networks is presented, in which a mean-field formulation of the Bellman-Ford method for the single unicast problem is used as a common platform for developing algorithms for the multiple unicast, multicast, and multiple multicast problems. The appealing locality and update philosophy of the Bellman-Ford algorithm is inherited. For all problem types the objective is to minimize a total connection cost, defined as the sum of the individual costs of the involved arcs, subject to capacity constraints. The methods are evaluated on synthetic problem instances by comparison with exact solutions where these are accessible, and otherwise with approximate results from simple heuristics. In general, the quality of the results is better than that of the heuristics. Furthermore, the computational demands are modest, even when the distributed nature of the approach is not exploited numerically.
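
    The mean-field algorithms above inherit the local update of Bellman-Ford. For orientation, a minimal sketch of the classical Bellman-Ford relaxation on a small asymmetric graph (arc costs are illustrative, not from the paper) might look like this:

```python
# Minimal Bellman-Ford shortest-path relaxation (illustrative graph,
# not from the paper). dist[v] converges to the cheapest cost from src.
def bellman_ford(n, arcs, src):
    INF = float("inf")
    dist = [INF] * n
    dist[src] = 0.0
    for _ in range(n - 1):              # at most n-1 rounds of local updates
        for u, v, cost in arcs:
            if dist[u] + cost < dist[v]:
                dist[v] = dist[u] + cost
    return dist

# Asymmetric 4-node example: arc costs differ by direction.
arcs = [(0, 1, 1.0), (1, 0, 4.0), (1, 2, 2.0), (0, 2, 5.0), (2, 3, 1.0)]
print(bellman_ford(4, arcs, 0))   # [0.0, 1.0, 3.0, 4.0]
```

    Each pass uses only per-arc local information, which is the locality the paper's mean-field formulation retains.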

  • Global exponential stability of recurrent neural networks for solving optimization and related problems

    Publication Year: 2000 , Page(s): 1017 - 1022
    Cited by:  Papers (20)

    Global exponential stability is a desirable property for dynamic systems. The paper studies the global exponential stability of several existing recurrent neural networks for solving linear programming problems, convex programming problems with interval constraints, convex programming problems with nonlinear constraints, and monotone variational inequalities. In contrast to the existing results on global exponential stability, the present results do not require additional conditions on the weight matrices of recurrent neural networks and improve some existing conditions for global exponential stability. Therefore, the stability results in the paper further demonstrate the superior convergence properties of the existing neural networks for optimization.

  • A feedforward bidirectional associative memory

    Publication Year: 2000 , Page(s): 859 - 866
    Cited by:  Papers (13)

    In contrast to conventional feedback bidirectional associative memory (BAM) network models, a feedforward BAM network is developed based on a one-shot design algorithm of O(p^2(n+m)) computational complexity, where p is the number of prototype pairs and n, m are the dimensions of the input/output bipolar vectors. The feedforward BAM is an n-p-m three-layer network of McCulloch-Pitts neurons with storage capacity 2^min{m,n} and guaranteed perfect bidirectional recall. The overall network design procedure is fully scalable in the sense that any number p ⩽ 2^min{m,n} of bidirectional associations can be implemented. The prototype patterns may be arbitrarily correlated. With respect to inference performance, it is shown that the Hamming attractive radius of each prototype reaches the maximum possible value. Simulation studies and comparisons illustrate and support these theoretical developments.
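
    For contrast with the one-shot feedforward design, the conventional feedback BAM the abstract refers to can be sketched with Kosko-style outer-product encoding and alternating recall; the patterns below are illustrative, not from the paper:

```python
import numpy as np

# Conventional outer-product (feedback) BAM, for contrast with the
# feedforward design above. Illustrative sketch with toy patterns.
def bam_recall(W, a, n_iter=10):
    """Alternate b = sgn(W^T a), a = sgn(W b) until a stable pair."""
    for _ in range(n_iter):
        b = np.sign(W.T @ a)
        a = np.sign(W @ b)
    return a, b

# Store p = 2 bipolar pairs via Hebbian outer products: W = sum_i a_i b_i^T
A = np.array([[1, -1, 1, -1], [1, 1, -1, -1]])   # domain patterns (n = 4)
B = np.array([[1, 1, -1], [-1, 1, 1]])           # range patterns (m = 3)
W = A.T @ B
a, b = bam_recall(W, A[0].astype(float))         # recalls the pair (A[0], B[0])
```

    Because the two domain patterns here are orthogonal, recall is perfect; the paper's point is that its feedforward construction guarantees this even for arbitrarily correlated prototypes.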

  • A unified neural-network-based speaker localization technique

    Publication Year: 2000 , Page(s): 997 - 1002
    Cited by:  Papers (6)  |  Patents (1)

    Locating and tracking a speaker in real time using microphone arrays is important in many applications such as hands-free video conferencing, speech processing in large rooms, and acoustic echo cancellation. A speaker can move from the far field to the near field of the array, or vice versa. Many neural-network-based localization techniques exist, but they are applicable only to far-field or only to near-field sources, and they are computationally intensive for real-time speaker localization because of the wide-band nature of speech. We propose a unified neural-network-based source localization technique that is simultaneously applicable to wide-band and narrow-band signal sources in the far field or near field of a microphone array. The technique exploits a multilayer perceptron feedforward neural-network structure and forms the feature vectors by computing the normalized instantaneous cross-power spectrum samples between adjacent pairs of sensors. Simulation results indicate that our technique is able to locate a source with an absolute error of less than 3.5° at a signal-to-noise ratio of 20 dB and a sampling rate of 8000 Hz at each sensor.

  • Estimation of elliptical basis function parameters by the EM algorithm with application to speaker verification

    Publication Year: 2000 , Page(s): 961 - 969
    Cited by:  Papers (15)  |  Patents (1)

    This paper proposes to incorporate full covariance matrices into radial basis function (RBF) networks and to use the expectation-maximization (EM) algorithm to estimate the basis function parameters. The resulting networks, referred to as elliptical basis function (EBF) networks, are evaluated through a series of text-independent speaker verification experiments involving 258 speakers from a phonetically balanced, continuous speech corpus (TIMIT). We propose a verification procedure using RBF and EBF networks as speaker models and show that the networks are readily applicable to verifying speakers using LP-derived cepstral coefficients as features. Experimental results show that small EBF networks with basis function parameters estimated by the EM algorithm outperform large RBF networks trained in the conventional approach. The results also show that the equal error rate achieved by the EBF networks is about two-thirds of that achieved by the vector-quantization-based speaker models.
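
    The core estimation step behind EBF parameters, EM for Gaussian components with full covariance matrices, can be sketched as follows; the initialization, synthetic data, and fixed iteration count are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def em_full_cov(X, k, n_iter=50):
    """EM for a k-component Gaussian mixture with full covariance
    matrices -- the estimation step behind elliptical basis functions.
    Illustrative sketch; initialization and stopping are assumptions."""
    n, d = X.shape
    # Farthest-point initialization keeps the starting means well spread.
    mu = [X[0]]
    for _ in range(1, k):
        d2 = np.min([((X - m) ** 2).sum(1) for m in mu], axis=0)
        mu.append(X[d2.argmax()])
    mu = np.array(mu)
    cov = np.stack([np.cov(X.T) + 1e-6 * np.eye(d)] * k)
    pi = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibilities r[i, j] = p(component j | x_i).
        r = np.empty((n, k))
        for j in range(k):
            diff = X - mu[j]
            mah = np.einsum("ni,ij,nj->n", diff, np.linalg.inv(cov[j]), diff)
            norm = np.sqrt(np.linalg.det(cov[j]) * (2 * np.pi) ** d)
            r[:, j] = pi[j] * np.exp(-0.5 * mah) / norm
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixing weights, means, full covariances.
        nk = r.sum(axis=0)
        pi, mu = nk / n, (r.T @ X) / nk[:, None]
        for j in range(k):
            diff = X - mu[j]
            cov[j] = (r[:, j, None] * diff).T @ diff / nk[j] + 1e-6 * np.eye(d)
    return pi, mu, cov

# Two well-separated synthetic clusters stand in for feature vectors.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0, 0], 0.3, (200, 2)),
               rng.normal([5, 5], 0.3, (200, 2))])
pi, mu, cov = em_full_cov(X, 2)
```

    The full covariance matrices are what make the basis functions elliptical rather than radial; with diagonal covariances fixed to a shared scale, this reduces to the conventional RBF case.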

  • A modified Hopfield auto-associative memory with improved capacity

    Publication Year: 2000 , Page(s): 867 - 878
    Cited by:  Papers (5)

    This paper describes a new procedure to implement a recurrent neural network (RNN) based on a new approach to the well-known Hopfield autoassociative memory. In our approach an RNN is seen as a complete graph G, and the learning mechanism is also based on Hebb's law, but with a very significant difference: the weights, which control the dynamics of the net, are obtained by coloring the graph G. Once the training is complete, the synaptic matrix of the net will be the weight matrix of the graph. Any of these matrices fulfils certain spatial properties; for this reason they will be referred to as tetrahedral matrices. The geometrical properties of these tetrahedral matrices may be used for classifying the n-dimensional state-vector space into n classes. In the recall stage, a parameter vector is introduced that is related to the capacity of the network. It may be shown that the bigger the value of the ith component of the parameter vector, the lower the capacity of the ith class of the state-vector space becomes. Once the capacity has been controlled, a new set of parameters that uses the statistical deviation of the prototypes to compare them with those that appear as fixed points is introduced, thus eliminating a great number of parasitic fixed points.

  • Mixture of experts for classification of gender, ethnic origin, and pose of human faces

    Publication Year: 2000 , Page(s): 948 - 960
    Cited by:  Papers (51)  |  Patents (16)

    We describe the application of mixtures of experts to gender, ethnic-origin, and pose classification of human faces, and show their feasibility on the FERET database of facial images. The mixture of experts is implemented using the “divide and conquer” modularity principle with respect to the granularity and/or the locality of information. The mixture of experts consists of ensembles of radial basis functions (RBFs). Inductive decision trees (DTs) and support vector machines (SVMs) implement the “gating network” components that decide which of the experts should be used to determine the classification output and that restrict the support of the input space. Both the ensemble of RBFs (ERBF) and the SVM use the RBF kernel (“expert”) for gating the inputs. Our experimental results yield an average accuracy rate of 96% on gender classification and 92% on ethnic classification using the ERBF/DT approach on frontal face images, while the SVM yields 100% on pose classification.

  • The analysis of decomposition methods for support vector machines

    Publication Year: 2000 , Page(s): 1003 - 1008
    Cited by:  Papers (43)

    The support vector machine (SVM) is a promising technique for pattern recognition. It requires the solution of a large dense quadratic programming problem. Traditional optimization methods cannot be directly applied due to memory restrictions. Up to now, very few methods can handle the memory problem, and an important one is the “decomposition method.” However, there has been no convergence proof so far. We connect this method to projected gradient methods and provide theoretical proofs for a version of decomposition methods. An extension to the bound-constrained formulation of SVM is also provided. We then show that this convergence proof is valid for general decomposition methods if their working set selection meets a simple requirement.
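
    The projected gradient connection can be illustrated on a tiny bound-constrained QP of the SVM form, min ½xᵀQx − pᵀx subject to 0 ⩽ x ⩽ C; a real decomposition method optimizes over small working sets of variables rather than the full vector, so this is only a sketch:

```python
import numpy as np

# Projected-gradient sketch for a bound-constrained QP of the SVM form:
#     min 0.5 x^T Q x - p^T x   s.t.   0 <= x <= C
# Illustrative only; decomposition methods work on small variable subsets.
def projected_gradient_qp(Q, p, C, step=None, n_iter=500):
    if step is None:
        step = 1.0 / np.linalg.norm(Q, 2)       # safe step: 1 / lambda_max
    x = np.zeros(len(p))
    for _ in range(n_iter):
        grad = Q @ x - p
        x = np.clip(x - step * grad, 0.0, C)    # gradient step, then project
    return x

Q = np.array([[2.0, 0.5], [0.5, 1.0]])          # positive definite
p = np.array([1.0, 1.0])
x = projected_gradient_qp(Q, p, C=10.0)         # here bounds are inactive
```

    With C large enough the iterate converges to the unconstrained minimizer Q⁻¹p; with a tight C the projection keeps the iterate feasible and the limit satisfies the KKT conditions at the bound.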

  • Visualization and self-organization of multidimensional data through equalized orthogonal mapping

    Publication Year: 2000 , Page(s): 1031 - 1038
    Cited by:  Papers (4)  |  Patents (7)

    An approach to dimension-reduction mapping of multidimensional pattern data is presented. The motivation for this work is to provide a computationally efficient method for visualizing large bodies of complex multidimensional data as a relatively “topologically correct” lower dimensional approximation. Examples of the use of this approach in obtaining meaningful two-dimensional (2-D) maps and comparisons with those obtained by the self-organizing map (SOM) and the neural-net implementation of Sammon's approach are also presented and discussed. In this method, the mapping equalizes and orthogonalizes the lower dimensional outputs by reducing the covariance matrix of the outputs to the form of a constant times the identity matrix. This new method is computationally efficient and “topologically correct” in interesting and useful ways.

  • Enumeration of linear threshold functions from the lattice of hyperplane intersections

    Publication Year: 2000 , Page(s): 839 - 850
    Cited by:  Papers (5)

    We present a method for enumerating linear threshold functions of n-dimensional binary inputs for neural nets. Our starting point is the geometric lattice Ln of hyperplane intersections in the dual (weight) space. We show how the hyperoctahedral group On+1, the symmetry group of the (n+1)-dimensional hypercube, can be used to construct a symmetry-adapted poset of hyperplane intersections Δn which is much more compact and tractable than Ln. A generalized zeta function and its inverse, the generalized Möbius function, are defined on Δn. Symmetry-adapted posets of hyperplane intersections for three-, four-, and five-dimensional inputs are constructed, and the number of linear threshold functions is computed from the generalized Möbius function. Finally, we show how equivalence classes of linear threshold functions are enumerated by unfolding the symmetry-adapted poset of hyperplane intersections into a symmetry-adapted face poset. It is hoped that our construction will lead to ways of placing asymptotic bounds on the number of equivalence classes of linear threshold functions.

  • The hysteretic Hopfield neural network

    Publication Year: 2000 , Page(s): 879 - 888
    Cited by:  Papers (20)

    A new neuron activation function based on hysteresis, a property found in physical systems, is proposed. We incorporate this neuron activation in a fully connected dynamical system to form the hysteretic Hopfield neural network (HHNN). We then present an analog implementation of this architecture and its associated dynamical equation and energy function. We proceed to prove Lyapunov stability for this new model, and then solve a combinatorial optimization problem (the N-queens problem) using this network. We demonstrate the advantages of hysteresis by showing an increased frequency of convergence to a solution when the parameters associated with the activation function are varied.

  • Probabilistic neural-network structure determination for pattern classification

    Publication Year: 2000 , Page(s): 1009 - 1016
    Cited by:  Papers (49)

    Network structure determination is an important issue in pattern classification based on a probabilistic neural network. In this study, a supervised network structure determination algorithm is proposed. The proposed algorithm consists of two parts and runs in an iterative way. The first part identifies an appropriate smoothing parameter using a genetic algorithm, while the second part determines suitable pattern layer neurons using a forward regression orthogonal algorithm. The proposed algorithm is capable of offering a fairly small network structure with satisfactory classification accuracy.

  • Analysis of input-output clustering for determining centers of RBFN

    Publication Year: 2000 , Page(s): 851 - 858
    Cited by:  Papers (52)

    The key point in the design of radial basis function networks is to specify the number and the locations of the centers. Several heuristic hybrid learning methods, which apply a clustering algorithm for locating the centers and subsequently a linear least-squares method for the linear weights, have previously been suggested. These hybrid methods can be put into two groups, which will be called input clustering (IC) and input-output clustering (IOC), depending on whether the output vector is also involved in the clustering process. The idea of concatenating the output vector to the input vector in the clustering process has been proposed independently in several papers, although none of them presented a theoretical analysis of such procedures but rather demonstrated their effectiveness in several applications. The main contribution of this paper is to present an approach for investigating the relationship between the clustering process on input-output training samples and the mean squared output error in the context of a radial basis function network (RBFN). Our investigations may be summarized as follows: (1) a weighted mean squared input-output quantization error, which is to be minimized by IOC, yields an upper bound on the mean squared output error; (2) this upper bound, and consequently the output error, can be made arbitrarily small (zero in the limit) by decreasing the quantization error, which can be accomplished by increasing the number of hidden units.
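
    A minimal IOC sketch, under the assumption of plain k-means on the concatenated [x; y] vectors followed by least squares for the linear weights (the paper's weighting of the output component may differ):

```python
import numpy as np

rng = np.random.default_rng(1)

def rbfn_ioc(X, y, k, width=1.0, out_weight=1.0, n_iter=30):
    """Input-output clustering (IOC) sketch: k-means on concatenated
    [x; out_weight * y] vectors, keep the input parts of the codebook as
    RBF centers, then solve the linear weights by least squares.
    Illustrative, not the paper's exact formulation."""
    Z = np.hstack([X, out_weight * y[:, None]])
    C = Z[rng.choice(len(Z), k, replace=False)]
    for _ in range(n_iter):                       # plain k-means on [x; y]
        d2 = ((Z[:, None, :] - C[None]) ** 2).sum(-1)
        lab = d2.argmin(1)
        for j in range(k):
            if (lab == j).any():
                C[j] = Z[lab == j].mean(0)
    centers = C[:, :X.shape[1]]                   # discard the output part
    Phi = np.exp(-((X[:, None, :] - centers[None]) ** 2).sum(-1)
                 / (2 * width ** 2))
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # linear weights
    return centers, w, Phi @ w

# Toy regression problem: fit sin(x) on [-3, 3] with 10 IOC centers.
X = rng.uniform(-3, 3, (200, 1))
y = np.sin(X[:, 0])
centers, w, yhat = rbfn_ioc(X, y, k=10)
```

    Setting out_weight to zero recovers plain input clustering (IC); the paper's bound concerns how the y-aware quantization error controls the output error.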

  • Temporal updating scheme for probabilistic neural network with application to satellite cloud classification

    Publication Year: 2000 , Page(s): 903 - 920
    Cited by:  Papers (23)  |  Patents (1)

    In cloud classification from satellite imagery, temporal change in the images is one of the main factors that cause degradation in classifier performance. In this paper, a novel temporal updating approach is developed for probabilistic neural network (PNN) classifiers that can be used to track temporal changes in a sequence of images. This is done by utilizing the temporal contextual information and adjusting the PNN to adapt to such changes. Whenever a new set of images arrives, an initial classification is first performed using the PNN updated up to the last frame, while at the same time a prediction using Markov chain models is made based on the classification results of the previous frame. The results of the old PNN and the predictor are then compared. Depending on the outcome, either a supervised or an unsupervised updating scheme is used to update the PNN classifier. The maximum likelihood (ML) criterion is adopted in both the training and updating schemes. The proposed scheme is examined on both a simulated data set and Geostationary Operational Environmental Satellite (GOES) 8 cloud imagery data. The results indicate improvements in classification accuracy when the proposed scheme is used.

  • Memory annihilation of structured maps in bidirectional associative memories

    Publication Year: 2000 , Page(s): 1023 - 1030

    Structured sets comprise Boolean vectors with equal pairwise Hamming distances h. An external vector, if it exists at an equidistance of h/2 from each vector of the structured set, is called the centroid of the set. A structured map is a one-to-one mapping between structured sets. It is a set of associations between Boolean vectors, where both domain and range vectors are drawn from structured sets. Associations between centroids are called centroidal associations. We show that when structured maps are encoded into bidirectional associative memories using outer-product correlation encoding, the memory of these associations is annihilated under certain mild conditions. When annihilation occurs, the centroidal association emerges as a stable association, and we call it an alien attractor. For the special case of maps where h = 2, self-annihilation can take place when either the domain or range dimension is greater than five. In fact, we show that for dimensions greater than eight, as few as three associations suffice for self-annihilation. As an example shows, annihilation occurs even for bipolar decoding, which is well known for its improved error-correction capability in such associative memory models.

  • Motion segmentation based on motion/brightness integration and oscillatory correlation

    Publication Year: 2000 , Page(s): 935 - 947
    Cited by:  Papers (16)  |  Patents (1)

    A segmentation method based on the integration of motion and brightness is proposed for image sequences. The method is composed of two parallel pathways that process motion and brightness, respectively. Inspired by the visual system, the motion pathway has two stages. The first stage estimates local motion at locations with reliable information. The second stage performs segmentation based on local motion estimates. In the brightness pathway, the input scene is segmented into regions based on brightness distribution. Subsequently, segmentation results from the two pathways are integrated to refine motion estimates. The final segmentation is performed in the motion network based on the refined estimates. For segmentation, a locally excitatory globally inhibitory oscillator network (LEGION) architecture is employed, whereby the oscillators corresponding to a region of similar motion/brightness oscillate in synchrony and different regions attain different phases. Results on synthetic and real image sequences are provided, and comparisons with other methods are made.

  • Generalized neurofuzzy network modeling algorithms using Bezier-Bernstein polynomial functions and additive decomposition

    Publication Year: 2000 , Page(s): 889 - 902
    Cited by:  Papers (11)

    This paper introduces a new neurofuzzy model construction algorithm for nonlinear dynamic systems based upon basis functions that are Bezier-Bernstein polynomial functions. The approach is generalized in that it copes with n-dimensional inputs by utilising an additive decomposition construction to overcome the curse of dimensionality associated with high n. The construction algorithm also introduces univariate Bezier-Bernstein polynomial functions for the completeness of the generalized procedure. Like B-spline-expansion-based neurofuzzy systems, Bezier-Bernstein polynomial function based neurofuzzy networks hold desirable properties such as nonnegativity of the basis functions, unity of support, and interpretability of the basis functions as fuzzy membership functions, with the additional advantages of structural parsimony and a Delaunay input-space partition, essentially overcoming the curse of dimensionality associated with conventional fuzzy and RBF networks. The new modeling network is based on an additive decomposition approach together with two separate basis function formation approaches for the univariate and bivariate Bezier-Bernstein polynomial functions used in model construction. The overall network weights are then learnt using conventional least-squares methods. Numerical examples are included to demonstrate the effectiveness of this new data-based modeling approach.

  • Identification of complex shapes using a self organizing neural system

    Publication Year: 2000 , Page(s): 921 - 934
    Cited by:  Papers (2)  |  Patents (1)

    We present a multilayer hierarchical neural system for automatic classification of complex contour patterns. The system consists of a neocognitron-like network structure combined with self-organizing maps to automatically determine feature classes. We present results showing that multilayer hierarchical networks are able to tolerate pattern distortion considerably better than standard neural-network implementations.

  • Another K-winners-take-all analog neural network

    Publication Year: 2000 , Page(s): 829 - 838
    Cited by:  Papers (31)

    An analog Hopfield-type neural network is given that identifies the K largest components of a list d of N real numbers. The neurons are identical, with a tanh characteristic, and the weight matrix is symmetric and fully filled. The list to be processed is a summand of the input currents of the neurons, and the network is started from zero. We provide easily computable restrictions on the parameters; the main emphasis here is on the magnitude of the neuronal gain. A complete mathematical analysis is given. The trajectories are shown to eventually have positive components precisely in the positions given by the K largest elements of the input list.

  • Optical neuron by use of a laser diode with injection seeding and external optical feedback

    Publication Year: 2000 , Page(s): 988 - 996
    Cited by:  Papers (5)

    We present an all-optical neuron that uses a multimode laser diode subjected to external optical feedback and light injection. The shape of the threshold function needed for neural operation is controlled by adjusting the external feedback level individually for two longitudinal cavity modes of the laser diode. One of the two modes corresponds to the output of the neuron; light injection at the wavelength of this mode provides excitatory input. Light injection in the other mode provides inhibitory input. When light corresponding to two input signals is injected in the same mode, summation of the input signals can be achieved. A rate-equation model is used to explain the operating principle theoretically. The proposed injection-seeding neuron is built using free-space optics to demonstrate the concept experimentally. The results are in good agreement with the predictions of the rate-equation model. Some experimental results show threshold functions that are preferable from a neural-network point of view. These results agree well with injection-locking theory and experiments reported in the literature.

  • Toward a digital neuromorphic pitch extraction system

    Publication Year: 2000 , Page(s): 978 - 987
    Cited by:  Papers (1)

    This paper presents the design of a biologically based signal processing system implemented using standard digital technology. The four-stage AM detection system is a step toward a full pitch-extraction system and is based on known mammalian inferior colliculus (IC) physiology. The system is operational and has been successfully realized in field-programmable gate array technology. Details of the system architecture, its operating principles, and the design decisions necessary to successfully realize neuromorphic systems in digital technology are given.


Aims & Scope

IEEE Transactions on Neural Networks is devoted to the science and technology of neural networks, publishing significant technical knowledge, exploratory developments, and applications of neural networks ranging from biology to software to hardware.

This Transactions ceased production in 2011. The current retitled publication is IEEE Transactions on Neural Networks and Learning Systems.
