
IEEE Transactions on Neural Networks

Issue 3 • May 1998


  • Using self-organizing maps to learn geometric hash functions for model-based object recognition

    Page(s): 560 - 570

    A major problem associated with geometric hashing and the methods that have emerged from it is the nonuniform distribution of invariants over the hash space. In this paper, a new approach is proposed, based on an elastic hash table: we proceed by distributing the hash bins over the invariants. The key idea is to associate the hash bins with the output nodes of a self-organizing feature map (SOFM) neural network which is trained using the invariants as training examples. In this way, the location of a hash bin in the space of invariants is determined by the weight vector of the node associated with that bin. The advantage of the proposed approach is that it adapts to the invariants through learning; hence, it makes no assumptions about their statistical characteristics, and the geometric hash function is actually computed through learning. Furthermore, the SOFM's topology-preserving property ensures that the computed geometric hash function is well behaved.
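A minimal sketch of the idea, not the authors' code: a small NumPy SOFM is trained on synthetic 2-D invariants, and the best-matching unit of an invariant serves as its learned hash bin. Data, grid size, and schedules are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
invariants = rng.standard_normal((5000, 2)) ** 3      # stand-in for a nonuniform invariant distribution

rows, cols = 10, 10                                   # 10x10 grid of hash bins
w = rng.uniform(-1, 1, (rows * cols, 2))              # one weight vector per bin
grid = np.array([(i, j) for i in range(rows) for j in range(cols)])

for t, x in enumerate(rng.permutation(invariants)):
    lr = 0.5 * (1 - t / len(invariants))              # decaying learning rate
    sigma = 3.0 * (1 - t / len(invariants)) + 0.5     # decaying neighborhood width
    bmu = np.argmin(((w - x) ** 2).sum(axis=1))       # best-matching unit
    h = np.exp(-((grid - grid[bmu]) ** 2).sum(axis=1) / (2 * sigma ** 2))
    w += lr * h[:, None] * (x - w)                    # pull the neighborhood toward x

def hash_bin(invariant):
    """The learned geometric hash function: invariant -> bin index."""
    return int(np.argmin(((w - invariant) ** 2).sum(axis=1)))
```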

  • Topology constraint free fuzzy gated neural networks for pattern recognition

    Page(s): 483 - 502

    A novel topology-constraint-free neural-network architecture using a generalized fuzzy gated neuron model is presented for pattern recognition tasks. Its main feature is that the network does not require weight adaptation at its input: the weights are initialized directly from the training pattern set. Eliminating the need for iterative weight adaptation schemes allows very quick network set-up, which makes fuzzy gated neural networks attractive. The proposed network is found to be functionally equivalent to spatio-temporal feature maps under a mild technical condition. Its classification performance is demonstrated on a 12-class synthetic three-dimensional (3-D) object data set, a real-world eight-class texture data set, and a real-world 12-class 3-D object data set. The results are compared with the classification accuracies obtained from a spatio-temporal feature map, an adaptive subspace self-organizing map, multilayer feedforward neural networks, radial basis function neural networks, and linear discriminant analysis. Although the network accurately classifies seen data and generalizes adequately to validation data, its performance is found to be sensitive to noise perturbations due to fine fragmentation of the feature space. The paper also provides partial solutions to this robustness issue by proposing improvements to various modules of the proposed network.

  • A dynamical system perspective of structural learning with forgetting

    Page(s): 508 - 515

    Structural learning with forgetting is an established method of using Laplace regularization to generate skeletal artificial neural networks. We develop a continuous dynamical-system model of regularization in which the regularization parameter is generalized to be a time-varying function. Analytic results are obtained for a Laplace regularizer and a quadratic error surface by solving a different linear system in each region of the weight space. The model also enables a comparison of Laplace and Gaussian regularization: both have a greater effect in weight-space directions that are less important for minimizing the quadratic error function, but whereas the Gaussian regularization parameter modifies the eigenvalues of the associated linear system, the Laplace parameter acts as a control input. This difference provides additional evidence for the superiority of the Laplace regularizer over the Gaussian one.
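The core update behind structural learning with forgetting is ordinary gradient descent plus a constant-magnitude Laplace (L1) decay. A minimal sketch on a quadratic error surface, with illustrative curvatures, shows the regularizer acting most strongly on the least important weight direction:

```python
import numpy as np

A = np.diag([10.0, 1.0, 0.1])          # curvatures: important -> unimportant directions
w_star = np.array([1.0, 1.0, 1.0])     # error-minimizing weights
w = np.zeros(3)
eta, eps = 0.02, 0.5                   # learning rate and forgetting strength

for _ in range(2000):
    grad = A @ (w - w_star)                # gradient of the quadratic error
    w -= eta * (grad + eps * np.sign(w))   # Laplace decay acts as a constant control input

print(w)   # the low-curvature (unimportant) weight is forgotten, i.e. driven to ~0
```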

  • Robust nonlinear system identification using neural-network models

    Page(s): 407 - 429

    We study the problem of identification for nonlinear systems in the presence of unknown driving noise, using both feedforward multilayer neural-network and radial basis function (RBF) network models. Our objective is to resolve the difficulty associated with the persistency-of-excitation condition inherent to the standard schemes in the neural identification literature. This difficulty is circumvented here by a novel formulation and by using a new class of identification algorithms recently obtained by Didinsky et al. (1995). We present a class of identifiers which secure a good approximant for the system nonlinearity, provided that some global optimization technique is used. Subsequently, we address the same problem under a third, worst-case L∞ criterion for RBF modeling. We present a neural-network version of an H∞-based identification algorithm from Didinsky et al., and show how it leads to satisfaction of a relevant persistency-of-excitation condition, and thereby to robust identification of the nonlinearity.

  • Comparative nonlinear modeling of renal autoregulation in rats: Volterra approach versus artificial neural networks

    Page(s): 430 - 435

    In this paper, feedforward neural networks with two types of activation function (sigmoidal and polynomial) are used to model the nonlinear dynamic relation between renal blood pressure and flow data, and their performance is compared to that of Volterra models obtained with the leading kernel estimation method based on Laguerre expansions. The two types of artificial neural network and the Volterra models achieve comparable normalized mean-square error (NMSE) in output prediction on independent testing data. However, the Volterra models obtained via the Laguerre expansion technique achieve this prediction NMSE with approximately half the number of free parameters of either neural-network model. Both approaches are nonetheless deemed effective in modeling nonlinear dynamic systems, and their cooperative use is recommended.
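A minimal sketch of the Laguerre-expansion route for a first-order Volterra model, assuming synthetic input/output data in place of the pressure/flow recordings: the input is passed through a discrete Laguerre filter bank and the expansion coefficients are fit by least squares, which is where the small free-parameter count comes from.

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(1)
x = rng.standard_normal(2000)                     # input (stand-in for blood pressure)
true_kernel = np.exp(-np.arange(50) / 10.0)
y = lfilter(true_kernel, [1.0], x) + 0.1 * rng.standard_normal(2000)  # output (stand-in for flow)

alpha, L = 0.8, 5                                 # Laguerre pole and number of basis functions
V = np.empty((len(x), L))
v = lfilter([np.sqrt(1 - alpha**2)], [1.0, -alpha], x)   # zeroth Laguerre filter
V[:, 0] = v
for j in range(1, L):                             # cascade of all-pass sections
    v = lfilter([-alpha, 1.0], [1.0, -alpha], v)
    V[:, j] = v

c, *_ = np.linalg.lstsq(V, y, rcond=None)         # only L free parameters to estimate
nmse = np.mean((y - V @ c) ** 2) / np.var(y)
print(f"prediction NMSE: {nmse:.4f}")
```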

  • Deterministic annealing techniques for a discrete-time neural-network updating in a block-sequential mode

    Page(s): 345 - 353

    A global stability criterion is derived for two constituent parameters of a discrete-time neural network updated in a block-sequential mode, and two deterministic annealing techniques incorporating this stability condition are studied. One technique gradually reduces the decay rate of the membrane potential toward zero; the other gradually increases the neuron gain toward infinity while the neuron states are updated iteratively. It is shown that deterministic annealing for parallel or partial-parallel updating can be accomplished without falling into sustained oscillations by properly controlling the decay rate of the membrane potential as well as the neuron gain. It is also demonstrated that near-optimal solutions are obtained for parallel, partial-parallel, and sequential updating by suitable selection of the two constituent parameters.
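A minimal sketch of the annealing schedule the abstract describes, on an illustrative Hopfield-type network: block-sequential updates while the membrane-potential decay rate is driven toward zero and the neuron gain toward a large value. Weights and the energy check are stand-ins, not the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20
W = rng.standard_normal((n, n)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)  # symmetric weights
u = 0.01 * rng.standard_normal(n)            # membrane potentials
x = np.tanh(u)                               # neuron states
blocks = np.array_split(np.arange(n), 4)     # block-sequential update groups

for t in range(200):
    beta = 0.5 * (1 - t / 200)               # decay rate annealed toward zero
    gain = 1.0 + 0.2 * t                     # neuron gain annealed toward "infinity"
    for blk in blocks:                       # update one block at a time
        u[blk] = beta * u[blk] + W[blk] @ x
        x[blk] = np.tanh(gain * u[blk])

print(f"final energy: {-0.5 * x @ W @ x:.3f}, states: {np.round(x, 2)}")
```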

  • Neural classifiers using one-time updating

    Page(s): 436 - 447

    The linear threshold element, or perceptron, is a linear classifier with limited capabilities due to the problems that arise when the input pattern set is linearly nonseparable. Assuming that the patterns are presented in a sequential fashion, we derive a theory for the detection of linear nonseparability as soon as it appears in the pattern set. This theory is based on the precise determination of the solution region in the weight space with the help of a special set of vectors. For this region, called the solution cone, we present a recursive computation procedure which allows immediate detection of nonseparability. The algorithm can be directly cast into a simple neural-network implementation. In this model the synaptic weights are committed. Finally, by combining many such neural models we develop a learning procedure capable of separating convex classes.
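The detection task can be illustrated with a linear-programming feasibility test standing in for the paper's solution-cone recursion: after each pattern arrives, separability holds iff some (w, b) satisfies y_i(w·x_i + b) >= 1 for all patterns seen so far. A hedged sketch:

```python
import numpy as np
from scipy.optimize import linprog

def still_separable(X, y):
    """Feasible iff some (w, b) gives y_i (w.x_i + b) >= 1 for all i."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    A_ub = -y[:, None] * np.hstack([X, np.ones((len(X), 1))])   # -(y_i)(x_i, 1).(w, b) <= -1
    res = linprog(c=np.zeros(X.shape[1] + 1), A_ub=A_ub,
                  b_ub=-np.ones(len(X)), bounds=(None, None))
    return res.status == 0                                      # 0 = feasible solution found

# XOR stream: the fourth pattern makes the set linearly nonseparable
stream = [([0, 0], -1), ([0, 1], +1), ([1, 0], +1), ([1, 1], -1)]
X, y = [], []
for xi, yi in stream:
    X.append(xi); y.append(yi)
    if not still_separable(X, y):
        print(f"nonseparability detected at pattern {len(X)}")
        break
```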

  • Comparative analysis of fuzzy ART and ART-2A network clustering performance

    Page(s): 544 - 559

    Adaptive resonance theory (ART) describes a family of self-organizing neural networks capable of clustering arbitrary sequences of input patterns into stable recognition codes, and many types of ART network have been developed to improve clustering capabilities. We compare the clustering performance of several of them: fuzzy ART, ART 2A with and without complement-encoded input patterns, and a Euclidean ART 2A variant. All types are tested with two-dimensional and high-dimensional input patterns in order to illustrate their general capabilities and characteristics in different system environments. Based on our simulation results, fuzzy ART appears less appropriate whenever input signals are corrupted by additional noise, while ART 2A-type networks remain stable in all inspected environments. Together with the other features examined, these results allow ART architectures suited to particular applications to be selected.
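For reference, a minimal sketch of the fuzzy ART dynamics being compared (complement coding, fuzzy-AND category choice, vigilance test, fast learning); parameters and data are illustrative:

```python
import numpy as np

alpha, rho, beta = 0.001, 0.75, 1.0      # choice, vigilance, learning rate
categories = []                          # one weight vector per committed category

def train_step(a):
    I = np.concatenate([a, 1 - a])       # complement coding keeps |I| constant
    order = sorted(range(len(categories)),
                   key=lambda j: -np.minimum(I, categories[j]).sum()
                                 / (alpha + categories[j].sum()))
    for j in order:                      # search categories by choice value
        match = np.minimum(I, categories[j]).sum() / I.sum()
        if match >= rho:                 # resonance: vigilance satisfied
            categories[j] = beta * np.minimum(I, categories[j]) + (1 - beta) * categories[j]
            return j
    categories.append(I.copy())          # otherwise commit a new category
    return len(categories) - 1

rng = np.random.default_rng(3)
for a in rng.uniform(0, 1, (200, 2)):    # cluster 2-D inputs
    train_step(a)
print(f"{len(categories)} categories committed at vigilance {rho}")
```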

  • Supervised texture classification using a probabilistic neural network and constraint satisfaction model

    Page(s): 516 - 522

    The texture classification problem is posed as a constraint satisfaction problem. The focus is on the use of a probabilistic neural network (PNN) to represent the distribution of feature vectors of each texture class in order to generate a feature-label interaction constraint; the distribution of features for each class is modeled as a Gaussian mixture. The feature-label interactions and a set of label-label interactions are represented on a constraint satisfaction neural network, and a stochastic relaxation strategy is used to obtain an optimal classification of textures in an image. The advantage of this approach is that all classes in an image are determined simultaneously, similar to human perception of textures in an image.
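A minimal sketch of the PNN component only (the constraint-satisfaction relaxation stage is not reproduced): each class density is a Parzen/Gaussian-kernel estimate over that class's training vectors, and a feature vector is assigned to the class with the largest estimate. The data and smoothing width are assumptions.

```python
import numpy as np

def pnn_classify(x, X_train, y_train, sigma=0.5):
    """Return the class whose kernel density estimate at x is largest."""
    scores = {}
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]
        d2 = ((Xc - x) ** 2).sum(axis=1)
        scores[c] = np.exp(-d2 / (2 * sigma**2)).mean()   # Parzen estimate of p(x | c)
    return max(scores, key=scores.get)

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])  # two "texture" classes
y = np.array([0] * 50 + [1] * 50)
print(pnn_classify(np.array([2.5, 2.5]), X, y))   # -> 1
```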

  • A successive overrelaxation backpropagation algorithm for neural-network training

    Page(s): 381 - 388

    A variation of the classical backpropagation algorithm for neural-network training is proposed, and its convergence is established using the perturbation results of Mangasarian and Solodov (1994). The algorithm resembles the successive overrelaxation (SOR) algorithm for systems of linear equations and linear complementarity problems in that it uses the most recently computed values of the weights to update the values on the remaining arcs.
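A hedged sketch of the SOR flavor of backpropagation, not the paper's exact algorithm: weight groups are updated one at a time, each with a gradient recomputed from the already-updated groups (Gauss-Seidel style) and scaled by a relaxation factor omega > 1. Network, data, and schedule are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.standard_normal((100, 2)); t = np.sin(X[:, :1]) + X[:, 1:]   # toy regression target
W1, W2 = rng.standard_normal((2, 8)) * 0.5, rng.standard_normal((8, 1)) * 0.5
omega, lr = 1.3, 0.05                         # overrelaxation factor and step size

def grads():
    h = np.tanh(X @ W1)                       # forward pass with the current weights
    e = h @ W2 - t
    g2 = h.T @ e / len(X)
    g1 = X.T @ ((e @ W2.T) * (1 - h**2)) / len(X)
    return g1, g2

for epoch in range(500):
    _, g2 = grads()
    W2 -= omega * lr * g2                     # update the output weights first...
    g1, _ = grads()                           # ...then recompute the gradient with the new W2
    W1 -= omega * lr * g1

print(f"final MSE: {np.mean((np.tanh(X @ W1) @ W2 - t) ** 2):.4f}")
```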

  • Multimodality exploration by an unsupervised projection pursuit neural network

    Page(s): 464 - 472

    Graphical inspection of multimodality is demonstrated using unsupervised lateral-inhibition neural networks. Three projection pursuit indexes are compared on low-dimensional simulated and real-world data: principal components, Legendre polynomial, and projection pursuit network.

  • Self-organization of spiking neurons using action potential timing

    Page(s): 575 - 578

    We propose a mechanism for unsupervised learning in networks of spiking neurons which is based on the timing of single firing events. Our results show that a topology-preserving behavior quite similar to that of Kohonen's self-organizing map can be achieved using temporal coding. In contrast to previous approaches, which use rate coding, the winner among competing neurons can be determined quickly and locally. Our model is a further step toward a more realistic description of unsupervised learning in biological neural systems, and it may provide a basis for fast implementations in pulsed VLSI.
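A minimal sketch of timing-based self-organization under assumed dynamics: a better weight-input match produces an earlier first spike, the earliest spike selects the winner, and the winner's grid neighborhood is pulled toward the input. The firing-time model is a stand-in for the paper's spiking neurons.

```python
import numpy as np

rng = np.random.default_rng(6)
grid = np.array([(i, j) for i in range(8) for j in range(8)])
w = rng.uniform(0, 1, (64, 2))

def first_spike_times(x):
    # stronger weighted input -> faster rise to threshold -> earlier spike
    drive = w @ x / (np.linalg.norm(w, axis=1) * np.linalg.norm(x) + 1e-12)
    return 1.0 / np.maximum(drive, 1e-6)      # time-to-first-spike (arbitrary units)

for t, x in enumerate(rng.uniform(0, 1, (3000, 2))):
    winner = np.argmin(first_spike_times(x))  # earliest spike wins the competition
    sigma = 2.5 * np.exp(-t / 1000) + 0.3
    h = np.exp(-((grid - grid[winner]) ** 2).sum(axis=1) / (2 * sigma**2))
    w += 0.1 * h[:, None] * (x - w)           # topology-preserving weight update
```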

  • Detection of mines and minelike targets using principal component and neural-network methods

    Page(s): 454 - 463

    Introduces a system for real-time detection and classification of arbitrarily scattered surface-laid mines from multispectral imagery of a minefield. The system consists of six channels which use various neural-network structures for feature extraction, detection, and classification of targets in six different optical bands ranging from near-UV to near-IR. In each channel, a single-layer autoassociative network trained with the recursive least squares (RLS) learning rule performs feature extraction. Based upon the extracted features, two different neural-network architectures were used, and their performance was compared against the standard maximum-likelihood (ML) classification scheme. The outputs of the detector/classifier networks in all the channels were fused in a final decision-making system; two schemes were considered, one using majority voting and one using a weighted combination based on consensual theory. Simulations were performed on real data for the six bands and on several images in order to account for variations in the size, shape, and contrast of the targets as well as the signal-to-clutter ratio. The overall results show the promise of the proposed system for the detection and classification of mines and minelike targets.

  • An analytical framework for local feedforward networks

    Page(s): 473 - 482

    Interference in neural networks occurs when learning in one area of the input space causes unlearning in another area. Networks that are less susceptible to interference are referred to as spatially local networks. To obtain a better understanding of these properties, a theoretical framework, consisting of a measure of interference and a measure of network localization, is developed. These measures incorporate not only the network weights and architecture but also the learning algorithm. Using this framework to analyze sigmoidal multilayer perceptron (MLP) networks that employ the backpropagation learning algorithm on the quadratic cost function, we address the familiar misconception that single-hidden-layer sigmoidal networks are inherently nonlocal by demonstrating that, given a sufficiently large number of adjustable weights, single-hidden-layer sigmoidal MLPs exist that are arbitrarily local while retaining the ability to approximate any continuous function on a compact domain.
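One simple instance of such a measure (an assumption, not the paper's exact definition): the normalized inner product of the output gradients at two inputs, which governs how a backpropagation step at one input perturbs the output at the other.

```python
import numpy as np

rng = np.random.default_rng(9)
W1, b1 = rng.standard_normal(16), rng.standard_normal(16)   # hidden weights/biases, scalar input
w2 = rng.standard_normal(16)                                # output weights

def grad(x):
    """Gradient of the network output tanh(x*W1 + b1) @ w2 w.r.t. all weights."""
    h = np.tanh(x * W1 + b1)
    dh = 1 - h ** 2
    return np.concatenate([x * dh * w2, dh * w2, h])        # d/dW1, d/db1, d/dw2

g_ref = grad(0.0)   # direction a backprop step at x = 0 would move the weights
for x2 in (0.1, 1.0, 3.0):
    g = grad(x2)
    overlap = g_ref @ g / (np.linalg.norm(g_ref) * np.linalg.norm(g))
    print(f"gradient overlap between x=0 and x={x2}: {overlap:+.3f}")
```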

  • Fast numerical integration of relaxation oscillator networks based on singular limit solutions

    Page(s): 523 - 532

    Relaxation oscillations exhibiting more than one time scale arise naturally in many physical systems. We propose a fast method to numerically integrate networks of relaxation oscillators coupled in a way that resembles chemical synapses. The numerical technique, called the singular limit method, is derived from an analysis of relaxation oscillations in the singular limit, where system evolution gives rise to time instants at which fast dynamics take place and intervals between them during which slow dynamics take place. A full description of the method is given for a locally excitatory globally inhibitory oscillator network (LEGION): the fast dynamics, characterized by jumping that leads to dramatic phase shifts, are captured by an iterative operation, and the slow dynamics are solved in full. The singular limit method is evaluated in computer experiments and produces a remarkable speedup over other methods of integrating these systems, making it possible to simulate large-scale oscillator networks.

  • Advanced neural-network training algorithm with reduced complexity based on Jacobian deficiency

    Page(s): 448 - 453

    We introduce an advanced supervised training method for neural networks. It is based on Jacobian rank deficiency, and it is formulated, in some sense, in the spirit of the Gauss-Newton algorithm. The Levenberg-Marquardt algorithm, a modified Gauss-Newton method, has been used successfully in solving nonlinear least-squares problems, including neural-network training; it significantly outperforms basic backpropagation and its variable-learning-rate variations, but at higher computation and memory cost per iteration. The new method developed in this paper aims to improve convergence properties while reducing the memory and computation complexity of supervised training. Extensive simulation results demonstrate the superior performance of the new algorithm over the Levenberg-Marquardt algorithm.
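For context, a minimal sketch of the Levenberg-Marquardt baseline the paper improves on: each step solves (J'J + mu*I) dw = -J'r, an O(p^3) cost in the parameter count that the paper's rank-deficiency technique targets. The toy problem is illustrative; the new algorithm itself is not reproduced.

```python
import numpy as np

def lm_step(residuals, jacobian, w, mu):
    """One Levenberg-Marquardt update of the parameter vector w."""
    r, J = residuals(w), jacobian(w)
    dw = np.linalg.solve(J.T @ J + mu * np.eye(len(w)), -J.T @ r)
    return w + dw

# toy least-squares problem: fit y = exp(a*x) + b
x = np.linspace(0, 1, 50); y = np.exp(1.5 * x) + 0.5
res = lambda w: np.exp(w[0] * x) + w[1] - y
jac = lambda w: np.column_stack([x * np.exp(w[0] * x), np.ones_like(x)])

w, mu = np.array([0.0, 0.0]), 1e-2
for _ in range(30):
    w_new = lm_step(res, jac, w, mu)
    if (res(w_new) ** 2).sum() < (res(w) ** 2).sum():
        w, mu = w_new, mu * 0.5            # accept the step, trust the model more
    else:
        mu *= 2.0                          # reject the step, damp more strongly
print(w)   # approaches [1.5, 0.5]
```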

  • Inductive inference from noisy examples using the hybrid finite state filter

    Page(s): 571 - 575

    Recurrent neural networks processing symbolic strings can be regarded as adaptive neural parsers. Given a set of positive and negative examples drawn from a given language, adaptive neural parsers can effectively be trained to infer the language's grammar. In this paper we use adaptive neural parsers to address the problem of inferring grammars from examples that are corrupted by a kind of noise that simply changes their membership. We propose a training algorithm, referred to as the hybrid finite-state filter, which is based on a parsimony principle that penalizes the development of complex rules. We report very promising experimental results showing that the proposed inductive inference scheme is indeed capable of capturing rules while removing noise.

  • Characteristics of multidimensional holographic associative memory in retrieval with dynamically localizable attention

    Page(s): 389 - 406

    This paper presents a performance analysis of multidimensional holographic associative memory (MHAC). MHAC has the unique ability to retrieve pattern associations with changeable attention: in attention-actuated retrieval, the user can dynamically select any subset of the elements in the example query pattern and expect the memory to confine its associative match to the specified field of attention. With this localizable attention, MHAC can retrieve information correctly even with cues as small as 10% of the query frame. The paper investigates the performance of MHAC in attention-actuated retrieval both analytically and experimentally. Besides confirming the analysis, the experiments identify an operational range space for this memory within which various attention-based applications can be built with a performance guarantee.

  • Recurrent neural-network training by a learning automaton approach for trajectory learning and control system design

    Page(s): 354 - 368

    We present a training approach, using concepts from the theory of stochastic learning automata, that eliminates the need to compute gradients. The approach also offers the flexibility to tailor a number of specific training algorithms based on the selection of linear and nonlinear reinforcement rules for updating automaton action probabilities. Its training efficiency is demonstrated on two complex temporal learning scenarios: learning time-dependent continuous trajectories and designing feedback controllers for continuous dynamical plants. For the first problem, it is shown that training algorithms can be tailored, following the present approach, so that a recurrent neural net learns to generate a benchmark circular trajectory more accurately than is possible with existing gradient-based training procedures. For the second, it is shown that recurrent neural-network-based feedback controllers can be trained for different control objectives.
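A minimal sketch of gradient-free training with a stochastic learning automaton using the linear reward-inaction rule; the task, the discrete action set, and all constants are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)
target = np.array([0.8, -0.3])              # weights that minimize the (unknown) cost
w = np.zeros(2)
actions = np.array([[0.05, 0], [-0.05, 0], [0, 0.05], [0, -0.05]])  # candidate weight steps
p = np.full(len(actions), 0.25)             # automaton action probabilities
a = 0.05                                    # reward step size

def cost(w):
    return ((w - target) ** 2).sum()        # stands in for the trajectory/control error

for _ in range(5000):
    k = rng.choice(len(actions), p=p)       # sample an action; no gradients anywhere
    if cost(w + actions[k]) < cost(w):      # environment signals a reward
        w = w + actions[k]
        p = p * (1 - a); p[k] += a          # linear reward-inaction (L_RI) update
    # on penalty the probabilities are left unchanged ("inaction")

print(w)    # ends near `target` using only reward/penalty feedback
```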

  • Image compression by self-organized Kohonen map

    Page(s): 503 - 507

    Presents a compression scheme for digital still images that uses Kohonen's neural-network algorithm, exploiting not only its vector quantization feature but also its topological property. This property allows an increase of about 80% in the compression rate. Compared to the JPEG standard, the scheme shows better performance (in terms of PSNR) at compression rates higher than 30.
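A hedged sketch of why the topological property helps, under assumed block and codebook sizes: image blocks are vector-quantized with a 1-D Kohonen chain, so adjacent blocks of a smooth image map to nearby codebook indices, and the index stream compresses well when delta-encoded. The JPEG/PSNR comparison is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(8)
x = np.linspace(0, 3, 64)
img = np.add.outer(np.sin(x), np.cos(x))             # smooth stand-in image, 64x64
blocks = img.reshape(16, 4, 16, 4).swapaxes(1, 2).reshape(-1, 16)   # 4x4 blocks -> 16-D vectors

K, idx = 32, np.arange(32)
w = rng.uniform(img.min(), img.max(), (K, 16))       # 1-D Kohonen chain as the codebook
for epoch in range(20):
    sigma = 4.0 * (1 - epoch / 20) + 0.5
    for v in rng.permutation(blocks):
        bmu = np.argmin(((w - v) ** 2).sum(axis=1))
        h = np.exp(-(idx - bmu) ** 2 / (2 * sigma**2))
        w += 0.1 * h[:, None] * (v - w)              # neighbors hold similar codewords

codes = np.argmin(((blocks[:, None, :] - w[None]) ** 2).sum(-1), axis=1)  # index per block
deltas = np.diff(codes)                              # topology => mostly small deltas
print(f"mean |index delta|: {np.abs(deltas).mean():.2f} (codebook of {K})")
```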

  • Two algorithms for neural-network design and training with application to channel equalization

    Page(s): 533 - 543

    We describe two algorithms for designing and training neural-network classifiers. The first, the linear programming slab algorithm (LPSA), is motivated by the problem of reconstructing digital signals corrupted by passage through a dispersive channel and by additive noise. It constructs a multilayer perceptron (MLP) to separate two disjoint sets by using linear programming methods to identify network parameters. The second, the perceptron learning slab algorithm (PLSA), avoids the computational costs of linear programming by using an error-correction approach to identify parameters. Both algorithms operate in highly constrained parameter spaces and are able to exploit symmetry in the classification problem. Using these algorithms, we develop a number of procedures for the adaptive equalization of a complex linear 4-quadrature amplitude modulation (QAM) channel, and compare their performance in a simulation study. Results are given for both stationary and time-varying channels, the latter based on the COST 207 GSM propagation model.

  • A self-organizing neural tree for large-set pattern classification

    Page(s): 369 - 380

    When classifying large-set and complex patterns, most conventional neural networks suffer from several difficulties, such as determining the structure and size of the network and the computational complexity. To cope with these difficulties, we propose a structurally adaptive intelligent neural tree (SAINT). The basic idea is to partition the input pattern space hierarchically using a tree-structured network composed of subnetworks with topology-preserving mapping ability. The main advantage of SAINT is that it attempts to find automatically, through structure adaptation, a network structure and size suitable for the classification of large-set and complex patterns. Experimental results reveal that SAINT is very effective for the classification of large-set real-world handwritten characters with high variation, as well as multilingual, multifont, and multisize large-set characters.


Aims & Scope

IEEE Transactions on Neural Networks is devoted to the science and technology of neural networks, publishing significant technical knowledge, exploratory developments, and applications of neural networks ranging from biology to software to hardware.

 

This Transactions ceased production in 2011. The current retitled publication is IEEE Transactions on Neural Networks and Learning Systems.
