Proceedings of 1993 International Conference on Neural Networks (IJCNN-93-Nagoya, Japan)

25-29 Oct. 1993

Papers 1-25 of 242
  • A new approach to storing temporal sequences

    Publication Year: 1993, Page(s):2745 - 2748 vol.3
    Cited by:  Papers (3)

    From a neurophysiological point of view, generating a temporal sequence is equivalent to correctly synchronizing, in time, patterns which are already known to the brain (e.g. the phonemes which compose words). In this paper, we propose an architecture for representing the patterns based on a two-dimensional map, where the processing elements are micro-columns composed of three elements in cascad...

  • Fuzzy learning vector quantization

    Publication Year: 1993, Page(s):2739 - 2743 vol.3
    Cited by:  Papers (4)

    In this paper, a new supervised competitive learning network model called fuzzy learning vector quantization (FLVQ), which incorporates fuzzy concepts into learning vector quantization (LVQ) networks, is proposed. Unlike the original algorithm, the FLVQ learning algorithm is derived by optimizing an appropriate fuzzy objective function which takes into account two goals, namely minimizi...

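The FLVQ model above generalizes the crisp LVQ update rule. As context only, here is a minimal sketch of the classical (non-fuzzy) LVQ1 step that FLVQ builds on; the prototypes, labels, sample, and learning rate are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def lvq1_update(prototypes, labels, x, y, lr=0.05):
    """One LVQ1 step: move the nearest prototype toward x if its
    class label matches y, away from x otherwise."""
    i = int(np.argmin(np.linalg.norm(prototypes - x, axis=1)))
    sign = 1.0 if labels[i] == y else -1.0
    prototypes[i] += sign * lr * (x - prototypes[i])
    return i

# toy setting: one prototype per class (values are illustrative)
protos = np.array([[0.0, 0.0], [1.0, 1.0]])
proto_labels = np.array([0, 1])
winner = lvq1_update(protos, proto_labels, np.array([0.9, 0.8]), 1)
```

The fuzzy variant in the paper replaces this winner-take-all step with updates derived from a fuzzy objective function, so every prototype receives a graded update rather than only the nearest one.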
  • Non-Hermitian associative memories spontaneously generating dynamical attractors

    Publication Year: 1993, Page(s):2567 - 2570 vol.3
    Cited by:  Papers (1)

    Experimental results on complex-valued associative memories are reported. When the memory has some dynamical attractors, the weighting matrix should generally be non-Hermitian. Non-Hermitian associative memories have the advantage that they can express inherently smoother trajectories than conventional real-number memories. In this paper, spontaneous variations of simple stationary attractors i...

  • Target recognition based on radial basis function network

    Publication Year: 1993, Page(s):2735 - 2738 vol.3
    Cited by:  Papers (2)

    A method of radar target recognition by range profiles is developed, based on the radial basis function network (RBFN). The problem of producing suitable patterns for recognition is discussed. Then a heuristic clustering algorithm for training RBFN is proposed. It is shown, from theoretical analysis and experimental results of rotating platform imaging based on experimental data acquired in a micr...

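For readers unfamiliar with RBFNs, a minimal sketch of the architecture the paper builds on: Gaussian hidden units followed by a linear output layer fit by least squares. The centers, width, and toy data here are illustrative assumptions; the paper's heuristic clustering algorithm for choosing centers is not reproduced.

```python
import numpy as np

def rbf_design(X, centers, sigma=1.0):
    """Gaussian hidden-layer activations for each sample/center pair."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

# toy 1-D regression; in practice the centers would come from clustering
X = np.array([[0.0], [0.5], [1.0], [1.5]])
y = np.sin(X[:, 0])
centers = np.array([[0.25], [1.25]])
Phi = rbf_design(X, centers)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # linear output layer
pred = rbf_design(np.array([[0.75]]), centers) @ w
```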
  • Generalization of the maximum capacity of recurrent neural networks

    Publication Year: 1993, Page(s):2563 - 2566 vol.3

    The authors have previously proposed a novel model which achieves the maximum capacity of 1-layer recurrent neural networks by using an initiator, A, to construct the weight matrix and threshold and to define an equation which produces all memorized vectors. In this paper, the authors generalize that model by lifting the restriction on A and give a new version of their model. Besides the explan...

  • Training strategies for weightless neural networks

    Publication Year: 1993, Page(s):2731 - 2734 vol.3

    Weightless neural networks (WNN) are implemented as random access memories. Training WNN requires only global error signals. WNN simulations can learn significantly faster than learning by error-backpropagation. The aim of this paper is to discuss different training strategies for WNN. One new strategy is suggested.

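Weightless networks of the kind discussed store node state in RAM look-up tables rather than in real-valued weights. A minimal WiSARD-style discriminator, with an illustrative tuple size and toy bit patterns (a background sketch, not the paper's training strategies), might look like:

```python
import random

def make_tuples(n_bits, tuple_size, seed=0):
    """Randomly partition the input bit positions into address tuples."""
    idx = list(range(n_bits))
    random.Random(seed).shuffle(idx)
    return [idx[i:i + tuple_size] for i in range(0, n_bits, tuple_size)]

class Discriminator:
    """One RAM-based discriminator: each node remembers which address
    patterns it saw during training; the score counts matching nodes."""
    def __init__(self, tuples):
        self.tuples = tuples
        self.rams = [set() for _ in tuples]

    def _addr(self, bits, t):
        return tuple(bits[i] for i in t)

    def train(self, bits):
        for ram, t in zip(self.rams, self.tuples):
            ram.add(self._addr(bits, t))

    def score(self, bits):
        return sum(self._addr(bits, t) in ram
                   for ram, t in zip(self.rams, self.tuples))

disc = Discriminator(make_tuples(8, 2))
disc.train([1, 0, 1, 0, 1, 0, 1, 0])
```

Training is a single write per RAM node, which is why only global error signals are needed and why learning can be much faster than backpropagation.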
  • Some properties of an associative memory model using the Boltzmann machine learning

    Publication Year: 1993, Page(s):2662 - 2665 vol.3
    Cited by:  Papers (2)

    In this paper, Boltzmann machine learning is applied to an associative memory model. Boltzmann machine learning is superior to both correlation learning and orthogonal learning. It is not necessary to execute this learning procedure strictly for this model. The authors examine some properties of this learning method and the associative memory model using it and try to increase the units of the net...

  • A study on feature extraction using a fuzzy net for off-line signature recognition

    Publication Year: 1993, Page(s):2857 - 2860 vol.3
    Cited by:  Papers (3)  |  Patents (1)

    This paper presents a method of off-line signature recognition using feature strokes and a fuzzy net. Each stroke has features of signatures, and the fuzzy net proposed by the authors can extract personal characteristics from the strokes. An experiment is done to show the feasibility of the new method.

  • A genetic algorithm for training recurrent neural networks

    Publication Year: 1993, Page(s):2706 - 2709 vol.3
    Cited by:  Papers (10)

    A hybrid genetic algorithm is proposed for training neural networks with recurrent connections. A fully connected recurrent ANN model is employed and tested on a number of problems. Simulation results are presented for three problems: generation of a stable limit cycle, sequence recognition, and storage and reproduction of temporal sequences.

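As background, a bare-bones real-coded genetic algorithm of the kind used to evolve network weights. The fitness function here is a toy stand-in (matching a target weight vector) and the operators and parameters are illustrative assumptions, not the paper's hybrid scheme or its recurrent-network error.

```python
import random

def evolve(fitness, dim, pop_size=30, gens=60, sigma=0.1, seed=1):
    """Bare-bones real-coded GA: truncation selection, one-point
    crossover, Gaussian mutation; the top half survives unchanged."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(dim)
            child = a[:cut] + b[cut:]                         # crossover
            child = [g + rng.gauss(0, sigma) for g in child]  # mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# toy fitness: negative squared error to a target weight vector,
# standing in for (say) a recurrent net's error over a sequence
target = [0.5, -0.3, 0.8]
fit = lambda w: -sum((g - t) ** 2 for g, t in zip(w, target))
best = evolve(fit, dim=3)
```

For a real recurrent network, the fitness would instead unroll the net over the training sequences and return the negative accumulated error, which avoids gradient computation through time entirely.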
  • Robustness to noise of associative memory using non-monotonic analogue neurons

    Publication Year: 1993, Page(s):2559 - 2562 vol.3
    Cited by:  Papers (1)

    In this paper, the dependence of the memory capacity of an analogue associative memory model using non-monotonic neurons on static synaptic noise and static threshold noise is shown. This dependence was calculated analytically by means of the self-consistent signal-to-noise analysis (SCSNA). If the noise is extremely large, a higher monotonicity produces a larger memory capacity. At moderate noise...

  • Performance aspects of a novel neuron activation function in multi-layer feed-forward networks

    Publication Year: 1993, Page(s):2727 - 2730 vol.3
    Cited by:  Papers (1)

    Conventional single-layer networks are limited by their inability to solve nonlinear classification problems. A modified neuron activation function has recently been proposed to extend the classification capabilities of single-layer networks to cover some nonlinear problems. This paper shows that the classification capabilities of a multilayer network can also be improved by incorporation of the...

  • Stochastic neural networks and the weighted Hebb rule

    Publication Year: 1993, Page(s):2658 - 2661 vol.3
    Cited by:  Papers (2)

    Neural networks with synaptic connections based on the weighted Hebb rule are studied in the presence of noise, in the limit when the size of the network is very large. The presence of a sufficient amount of noise, measured by a critical temperature Tc, results in the elimination of spurious local minima. It is shown that the inclusion of even a single pattern weighted sufficiently more...

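The weighted Hebb rule referred to above scales each pattern's outer-product contribution to the synaptic matrix. A small numerical sketch under standard Hopfield assumptions (±1 patterns, zero self-coupling, deterministic sign-rule recall, i.e. the zero-temperature limit of the stochastic dynamics); the pattern count, network size, and weights are illustrative.

```python
import numpy as np

def weighted_hebb(patterns, weights):
    """J = (1/N) * sum_mu w_mu * xi_mu xi_mu^T, with zero self-coupling."""
    N = patterns.shape[1]
    J = sum(w * np.outer(p, p) for w, p in zip(weights, patterns)) / N
    np.fill_diagonal(J, 0.0)
    return J

def recall(J, x, steps=10):
    """Deterministic sign-rule recall (zero-temperature dynamics)."""
    for _ in range(steps):
        x = np.sign(J @ x)
        x[x == 0] = 1.0
    return x

rng = np.random.default_rng(0)
patterns = rng.choice([-1.0, 1.0], size=(3, 64))
J = weighted_hebb(patterns, [2.0, 1.0, 1.0])   # first pattern emphasized
probe = patterns[0].copy()
probe[:6] *= -1                                # corrupt 6 of 64 bits
out = recall(J, probe)
```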
  • Hybrid fuzzy ellipsoidal learning

    Publication Year: 1993, Page(s):2853 - 2856 vol.3
    Cited by:  Papers (1)

    Describes a hybrid system which combines supervised and unsupervised learning to find and tune the fuzzy-rule ellipsoids. Supervised learning tunes the ellipsoids to improve the approximation. Unsupervised competitive learning finds the statistics of data clusters. The covariance matrix of each synaptic quantization vector defines an ellipsoid centered at the quantizing vector or centroid of the da...

  • Learning and structuring of neural networks using genetic algorithm and linear programming

    Publication Year: 1993, Page(s):2702 - 2705 vol.3

    Proposes a method for generating neural networks using genetic algorithms and linear programming; the genetic algorithm is used to decide the structure of the neural net, while linear programming is used for learning. Unlike the usual backpropagation, this method has no parameter that is sensitive to the speed or the precision of learning. The effectiveness of the method is shown...

  • Discrete parallel-sequential update of neural networks with adapting synapses

    Publication Year: 1993, Page(s):2371 - 2374 vol.3

    We determine a parallel-sequential iteration starting from a configuration (x(0), A(0)) where A(0) is an n×n symmetric matrix and x(0) the activity vector of neurons. We prove that the dynamics is driven by a Lyapunov functional. Furthermore, some particular cases are analysed.

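The Lyapunov property mentioned in the abstract can be checked numerically for the standard sequential sign-rule update with a fixed symmetric, zero-diagonal coupling matrix — a simplification of the paper's setting, where the matrix A also adapts. The network size and random couplings are illustrative.

```python
import numpy as np

def energy(A, x):
    """Lyapunov functional E(x) = -1/2 x^T A x for the sign dynamics."""
    return -0.5 * x @ A @ x

def sequential_step(A, x):
    """Visit each neuron in turn and apply the sign update rule."""
    x = x.copy()
    for i in range(len(x)):
        x[i] = 1.0 if A[i] @ x >= 0 else -1.0
    return x

rng = np.random.default_rng(3)
B = rng.normal(size=(8, 8))
A = (B + B.T) / 2            # symmetric couplings
np.fill_diagonal(A, 0.0)     # zero diagonal keeps E non-increasing
x0 = rng.choice([-1.0, 1.0], size=8)
x1 = sequential_step(A, x0)
e0, e1 = energy(A, x0), energy(A, x1)
```

Each single-neuron flip changes the energy by -2|h_i| <= 0 (with h_i the local field), which is why symmetry and a non-negative diagonal are the standard sufficient conditions for convergence.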
  • A self-organizing supervised classifier

    Publication Year: 1993, Page(s):2484 - 2487 vol.3

    A new supervised neural network classifier for online learning is introduced. An association of prototype neurons and fuzzy membership function (MF) is used for cluster approximation. The new architecture based on adaptive resonance theory (ART) dedicates one adapted ART module (ARTMOD) to each class of patterns. Each prototype neuron defines a hyper-sphere in the input space. A class consists of ...

  • Non-Hopfield neuron network for visual texture boundary extraction

    Publication Year: 1993, Page(s):2207 - 2208 vol.3

    A non-Hopfield approach to neuron network (NN) development is proposed, based on an analysis of the neuron structure of the peripheral visual system. A 4-layered NN is described for real scene image processing and texture boundary extraction. The proposed NN corresponds to the lateral geniculate nucleus (LGN) level of the cat visual system.

  • Self-consistent signal-to-noise analysis of analog neural networks with nonmonotonic transfer functions and enhancement of the storage capacity

    Publication Year: 1993, Page(s):2555 - 2558 vol.3

    Analog neural networks of associative memory with nonmonotonic transfer functions are studied using the self-consistent signal-to-noise analysis. It is assumed that the networks are governed by continuous time dynamics and the synaptic couplings are formed by the Hebb learning rule with unbiased random patterns. The networks of nonmonotonic neurons are shown to exhibit remarkable properties leadin...

  • Fuzzy pocket algorithm: a generalized pocket algorithm for classification of fuzzy inputs

    Publication Year: 1993, Page(s):2873 - 2876 vol.3
    Cited by:  Papers (4)

    The perceptron algorithm has been widely adopted in pattern recognition to determine linear decision boundaries. The pocket algorithm, a perceptron-based algorithm, works well with nonseparable or even contradictory training instances. In this paper, a generalized pocket algorithm, called the fuzzy pocket algorithm, that is capable of handling inputs in linguistic terms is proposed. Linguistic terms are represe...

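The crisp pocket algorithm that the fuzzy variant generalizes keeps the best weight vector seen during perceptron training, which is what makes it usable on nonseparable data. A minimal sketch on a deliberately contradictory toy set (illustrative only; the paper's fuzzy extension to linguistic inputs is not reproduced):

```python
import numpy as np

def pocket(X, y, epochs=50, seed=0):
    """Pocket algorithm: run the plain perceptron rule, but keep
    ('pocket') the weights with the best training accuracy so far."""
    rng = np.random.default_rng(seed)
    Xb = np.hstack([X, np.ones((len(X), 1))])        # append bias input
    w = np.zeros(Xb.shape[1])
    acc = lambda v: float(np.mean(np.sign(Xb @ v) == y))
    best_w, best_acc = w.copy(), acc(w)
    for _ in range(epochs):
        for i in rng.permutation(len(Xb)):
            if np.sign(Xb[i] @ w) != y[i]:
                w = w + y[i] * Xb[i]                  # perceptron update
                if acc(w) > best_acc:
                    best_w, best_acc = acc(w) and w.copy(), acc(w)
    return best_w, best_acc

# contradictory toy set: no threshold on x can satisfy the last label,
# so the best achievable training accuracy is 3/4
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([-1, -1, 1, -1])
w_best, train_acc = pocket(X, y)
```

The plain perceptron would cycle forever on this data; the pocket copy converges (in probability) to the best linear classifier.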
  • A modified neuron activation function which enables single layer perceptrons to solve some linearly inseparable problems

    Publication Year: 1993, Page(s):2723 - 2726 vol.3
    Cited by:  Papers (2)

    It is well known that the representational ability of early neural network paradigms, notably the perceptron, Adaline, and Madaline, is limited to only linearly separable classification problems. This has been well documented in Minsky and Papert's book (1969). In this paper, a modified neuron activation function is proposed to extend the classification capability of individual neurons to cover a limit...

  • A hybrid neural network for principal component analysis

    Publication Year: 1993, Page(s):2500 - 2503 vol.3
    Cited by:  Patents (1)

    Neural network models performing principal component analysis have been considered. First we discuss the convergence of Sanger's heuristically developed two-layered neural network (1989) based on "generalized Hebbian algorithm". Then we propose a three-layered hybrid network model in which "generalized Hebbian algorithm" is used as the learning rule for the weights between input and hidden layers ...

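Sanger's generalized Hebbian algorithm (GHA), discussed in the abstract, extracts principal components with the update dW = lr * (y x^T - LT[y y^T] W), where LT[.] keeps the lower triangle. A one-component sketch on synthetic correlated data (learning rate, data, and initialization are illustrative assumptions):

```python
import numpy as np

def gha_step(W, x, lr=0.01):
    """One GHA update: dW = lr * (y x^T - LT[y y^T] W); with a single
    output row this reduces to Oja's rule."""
    y = W @ x
    W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W

# synthetic 2-D data whose leading principal axis is (1, 1)/sqrt(2)
rng = np.random.default_rng(0)
mix = np.array([[1.0, 0.9], [0.9, 1.0]])
data = rng.normal(size=(2000, 2)) @ mix
W = rng.normal(scale=0.1, size=(1, 2))   # one row -> top component only
for x in data:
    W = gha_step(W, x)
```

After training, the single weight row converges to a unit vector along the leading eigenvector of the data covariance; adding more rows extracts successive components in order.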
  • Stable dynamic backpropagation using constrained learning rate algorithm

    Publication Year: 1993, Page(s):2654 - 2657 vol.3

    An equilibrium point learning problem in discrete-time dynamic neural networks is studied in this paper using stable dynamic propagation with constrained learning rate algorithm. The new learning scheme provides an adaptive updating process of the synaptic weights of the network, so that the target pattern is stored at a stable equilibrium point. The applicability of the approach presented is illu...

  • Improving the back propagation learning speed with adaptive neuro-fuzzy technique

    Publication Year: 1993, Page(s):2897 - 2900 vol.3
    Cited by:  Patents (1)

    A neuro-fuzzy technique is presented to improve the standard back propagation learning speed. By adjusting both the learning rate and accelerator parameters based on the system error and the change of the error direction, the convergence rate of the proposed technique is found to be superior to that yielded by the conventional approach. Simulation results are given to demonstrate the applicability and ...

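The paper adapts the learning rate and momentum with fuzzy rules; as a simpler point of comparison, the classic "bold driver" heuristic also raises the rate while the error falls and cuts it when the error rises. A sketch on a toy quadratic error surface (not the paper's neuro-fuzzy rules; the growth/shrink factors are conventional illustrative values):

```python
import numpy as np

def bold_driver(grad, loss, w, lr=0.1, up=1.1, down=0.5, steps=100):
    """Grow the step size while the error keeps falling; on an error
    increase, reject the step and cut the rate."""
    prev = loss(w)
    for _ in range(steps):
        trial = w - lr * grad(w)
        cur = loss(trial)
        if cur <= prev:
            w, prev, lr = trial, cur, lr * up
        else:
            lr *= down          # step rejected, shrink the rate
    return w, prev

# toy quadratic error surface standing in for a network's loss
target = np.array([3.0, -2.0])
loss = lambda v: float(np.sum((v - target) ** 2))
grad = lambda v: 2 * (v - target)
w_fit, final_err = bold_driver(grad, loss, np.zeros(2))
```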
  • A learning method for solving inverse problems of static systems

    Publication Year: 1993, Page(s):2843 - 2851 vol.3

    The problem of computing the input value that realizes the desired output value of a target system is called the inverse problem. A popular method uses an inverse model of the target system acquired by learning. However, acquisition of the inverse model has a number of drawbacks. In this paper, a generalized inverse model with output feedback using the learned inverse model of the lineariz...

  • Differential equations accompanying neural networks and solvable nonlinear learning machines

    Publication Year: 1993, Page(s):2698 - 2701 vol.3

    Solvable models of nonlinear learning machines are analyzed based on the theory of ordinary differential equations. It is shown that a function approximation neural network automatically extracts an accompanying differential equation from learning samples and that optimal parameters can be found without recursion procedures.
