IEEE Transactions on Neural Networks

Issue 4 • July 1993


19 articles in this issue
  • Generalized clustering networks and Kohonen's self-organizing scheme

    Publication Year: 1993 , Page(s): 549 - 557
    Cited by:  Papers (104)

    The relationship between the sequential hard c-means (SHCM) and learning vector quantization (LVQ) clustering algorithms is discussed. The impact and interaction of these two families of methods with Kohonen's self-organizing feature mapping (SOFM), which is not a clustering method but often lends ideas to clustering algorithms, are considered. A generalization of LVQ that updates all nodes for a given input vector is proposed. The network attempts to find a minimum of a well-defined objective function. The learning rules depend on the degree of distance match to the winner node; the lesser the degree of match with the winner, the greater the impact on nonwinner nodes. Numerical results indicate that the terminal prototypes generated by this modification of LVQ are generally insensitive to initialization and independent of any choice of learning coefficient. The IRIS data of E. Anderson (1939) are used to illustrate the proposed method, and results are compared with the standard LVQ approach.
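
    As a rough sketch of the "update all nodes" idea, the snippet below moves every prototype toward the input with a fuzzy-c-means-style membership weight computed from relative distances; the paper's exact learning rule differs, and the function name, learning rate, and fuzzifier m are illustrative assumptions.

```python
import numpy as np

def update_all_prototypes(prototypes, x, lr=0.1, m=2.0):
    """Move every prototype toward input x, weighted by a fuzzy
    membership derived from relative distances. A sketch of the
    'update all nodes' idea only, not the paper's exact rule."""
    d = np.linalg.norm(prototypes - x, axis=1) + 1e-12    # distances to x
    ratios = (d[:, None] / d[None, :]) ** (2.0 / (m - 1.0))
    memberships = 1.0 / ratios.sum(axis=1)                # FCM-style weights
    prototypes += lr * memberships[:, None] * (x - prototypes)
    return prototypes
```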

  • Collective recall via the brain-state-in-a-box network

    Publication Year: 1993 , Page(s): 580 - 587
    Cited by:  Papers (9)

    A number of approaches to pattern recognition employ variants of nearest neighbor recall. This procedure uses a number of prototypes of known class and identifies an unknown pattern vector according to the prototype it is nearest to. A recall criterion of this type that depends on the relation of the unknown to a single prototype is a non-smooth function and leads to a decision boundary that is a jagged, piecewise linear hypersurface. Collective recall, a pattern recognition method based on a smooth nearness measure of the unknown to all the prototypes, is developed. The prototypes are represented as cells in a brain-state-in-a-box (BSB) network. Cells that represent the same pattern class are linked by positive weights and cells representing different pattern classes are linked by negative weights. Computer simulations of collective recall used in conjunction with learning vector quantization (LVQ) show significant improvement in performance relative to nearest neighbor recall for pattern classes defined by nonspherically symmetric Gaussians.
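
    A minimal sketch of the contrast drawn above, with a Gaussian nearness weight standing in for the BSB dynamics; the function names and the sharpness parameter beta are assumptions.

```python
import numpy as np

def nearest_neighbor_recall(x, prototypes, labels):
    """Hard recall: the class of the single nearest prototype."""
    d = np.linalg.norm(prototypes - x, axis=1)
    return labels[int(np.argmin(d))]

def collective_recall(x, prototypes, labels, beta=4.0):
    """Smooth recall: all prototypes vote, weighted by a smooth
    nearness measure (a stand-in for the BSB network dynamics)."""
    labels = np.asarray(labels)
    d = np.linalg.norm(prototypes - x, axis=1)
    w = np.exp(-beta * d**2)                    # smooth nearness weights
    classes = np.unique(labels)
    scores = [w[labels == c].sum() for c in classes]
    return classes[int(np.argmax(scores))]
```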

  • Rival penalized competitive learning for clustering analysis, RBF net, and curve detection

    Publication Year: 1993 , Page(s): 636 - 649
    Cited by:  Papers (151)

    It is shown that frequency-sensitive competitive learning (FSCL), one version of the recently improved competitive learning (CL) algorithms, significantly deteriorates in performance when the number of units is inappropriately selected. An algorithm called rival penalized competitive learning (RPCL) is proposed. In this algorithm, for each input, not only is the winner unit modified to adapt to the input, but its rival (the second winner) is also delearned by a smaller learning rate. RPCL can be regarded as an unsupervised extension of Kohonen's supervised LVQ2, and it has the ability to automatically allocate an appropriate number of units for an input data set. The experimental results show that RPCL outperforms FSCL when used for unsupervised classification, for training a radial basis function (RBF) network, and for curve detection in digital images.
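
    The winner/rival update itself is compact enough to sketch; note that the published algorithm also uses frequency-sensitive (conscience) weighting when selecting the winner, which is omitted here, and both learning rates are illustrative.

```python
import numpy as np

def rpcl_step(units, x, lr_winner=0.05, lr_rival=0.005):
    """One rival penalized competitive learning step: the winner
    adapts toward the input while the runner-up is 'delearned'
    (pushed away) at a much smaller rate."""
    d = np.linalg.norm(units - x, axis=1)
    winner, rival = np.argsort(d)[:2]
    units[winner] += lr_winner * (x - units[winner])   # attract winner
    units[rival]  -= lr_rival * (x - units[rival])     # repel rival
    return units
```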

  • Paralleled hardware annealing for optimal solutions on electronic neural networks

    Publication Year: 1993 , Page(s): 588 - 599
    Cited by:  Papers (15)  |  Patents (1)

    Three basic neural network schemes have been extensively studied: iterative networks, backpropagation networks, and self-organizing networks. Simulated annealing is a probabilistic hill-climbing technique that accepts, with a nonzero but gradually decreasing probability, deterioration in the cost function of the optimization problem. Hardware annealing, which combines the simulated annealing technique with continuous-time electronic neural networks by changing the voltage gain of the neurons, is discussed. The initial and final voltage gains for applying hardware annealing to Hopfield data-conversion networks are presented. In hardware annealing, the voltage gain of the output neurons is increased from an initial low value to a final high value in a continuous fashion, which helps to achieve the optimal solution of an optimization problem in one annealing cycle. Experimental results on the transfer function and transient response of electronic neural networks achieving the global minimum are also presented.
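
    A toy software analogue of one annealing cycle, assuming a standard continuous-time Hopfield model with tanh neurons and a linear gain ramp; the real circuit, gain values, and schedule are specified in the paper.

```python
import numpy as np

def hardware_annealing_cycle(W, b, steps=2000, g_low=0.1, g_high=50.0,
                             dt=0.01):
    """Relax a continuous Hopfield network while ramping the neuron
    gain from a low to a high value, mimicking one hardware
    annealing cycle (all values illustrative)."""
    u = np.zeros(W.shape[0])                         # internal states
    for k in range(steps):
        g = g_low + (g_high - g_low) * k / (steps - 1)   # gain ramp
        v = np.tanh(g * u)                               # neuron outputs
        u += dt * (-u + W @ v + b)                       # RC-style dynamics
    # at low gain the energy surface is smooth; raising the gain
    # gradually restores the original optimization landscape
    return np.tanh(g_high * u)
```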

  • A novel multilayer neural networks training algorithm that minimizes the probability of classification error

    Publication Year: 1993 , Page(s): 650 - 659
    Cited by:  Papers (17)

    A multilayer neural network training algorithm that minimizes the probability of classification error is proposed. The claim is made that such an algorithm possesses some clear advantages over the standard backpropagation (BP) algorithm. A convergence analysis of the proposed procedure is performed, and convergence of the sequence of criterion realizations with probability one is proven. An experimental comparison with the BP algorithm on three artificial pattern recognition problems is given.

  • Performance and fault-tolerance of neural networks for optimization

    Publication Year: 1993 , Page(s): 600 - 614
    Cited by:  Papers (19)

    The fault-tolerance characteristics of time-continuous, recurrent artificial neural networks (ANNs) that can be used to solve optimization problems are investigated. The performance of these networks is illustrated by using well-known model problems like the traveling salesman problem and the assignment problem. The ANNs are then subjected to up to 13 simultaneous stuck-at-1 or stuck-at-0 faults for network sizes of up to 900 neurons. The effect of these faults on the performance is demonstrated, and the cause for the observed fault-tolerance is discussed. An application is presented in which a network performs a critical task for a real-time distributed processing system by generating new task allocations during the reconfiguration of the system. The performance degradation of the ANN under the presence of faults is investigated by large-scale simulations, and the potential benefits of delegating a critical task to a fault-tolerant network are discussed.
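
    The stuck-at fault model is easy to reproduce in simulation; the helper below (names illustrative) clamps a random subset of neuron outputs to 0 or 1 before a solution is scored.

```python
import numpy as np

def inject_stuck_at(outputs, n_faults, seed=0):
    """Return a copy of the neuron outputs with n_faults randomly
    chosen neurons forced to 0 or 1 (stuck-at faults)."""
    rng = np.random.default_rng(seed)
    faulty = rng.choice(outputs.size, size=n_faults, replace=False)
    out = outputs.copy()
    out[faulty] = rng.integers(0, 2, size=n_faults)   # stuck-at-0 / -1
    return out
```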

  • Trial-and-error correlation learning

    Publication Year: 1993 , Page(s): 720 - 722
    Cited by:  Papers (3)

    A new learning architecture is proposed for hardware implementation of neural networks. In this architecture, each synaptic weight is intentionally changed on each trial and then modified in proportion to the trial-and-error correlation between the change in the weight and the change in the total output error. If the weight changes are small, this learning is almost as good as backpropagation (BP) learning, without requiring a complex backward network for error backpropagation. If the changes are large, the weights can move through the weight space without being trapped in a relatively small local minimum. Computer simulation shows that this learning surpasses BP learning in converging to the global minimum when the trial-and-error correlation is defined so as to emphasize the gain (i.e., the decrease in the total output error) rather than the loss.
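
    The rule described is close in spirit to weight perturbation; a minimal sketch under that reading, where loss maps a weight vector to the total output error, and sigma and lr are illustrative placeholders.

```python
import numpy as np

def trial_correlation_step(w, loss, rng, sigma=0.01, lr=0.1):
    """Perturb all weights at once, then reinforce the trial
    direction in proportion to the resulting error decrease."""
    e0 = loss(w)
    dw = sigma * rng.standard_normal(w.shape)   # trial change
    e1 = loss(w + dw)
    # correlate the weight change with the error change:
    # an error decrease (e0 - e1 > 0) pushes w further along dw
    return w + lr * (e0 - e1) / sigma**2 * dw

# usage: rng = np.random.default_rng(0); w = trial_correlation_step(w, loss, rng)
```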

  • An optoelectronic implementation of the adaptive resonance neural network

    Publication Year: 1993 , Page(s): 673 - 684
    Cited by:  Papers (11)

    A solution to the problem of implementing the adaptive resonance theory (ART) of neural networks is presented; it uses an optical correlator, which allows the large body of correlator research to be leveraged in the implementation of ART. The implementation takes advantage of the fact that one ART-based architecture, known as ART1, can be broken into several parts, some of which are better suited to parallel implementation. The control structure of ART, often regarded as its most complex part, is actually not very time consuming and can be implemented in electronics. The bottom-up and top-down gated pathways, however, are very time consuming to simulate and are difficult to implement directly in electronics because of the high number of interconnections. In addition to the design, the authors present experiments with a laboratory prototype to illustrate its feasibility and discuss implementation details that arise in practice. This device can potentially outperform alternative implementations of ART1 by two to three orders of magnitude on problems requiring especially large input fields.

  • Location and stability of the high-gain equilibria of nonlinear neural networks

    Publication Year: 1993 , Page(s): 660 - 672
    Cited by:  Papers (18)

    The author analyzes the number, location, and stability behavior of the equilibria of arbitrary nonlinear neural networks without resorting to energy arguments based on assumptions of symmetric interactions or no self-interactions. The class of networks studied consists of very general continuous-time, continuous-state (CTCS) networks that contain the standard Hopfield network as a special case. The emphasis is on the case where the slopes of the sigmoidal nonlinearities become larger and larger.

  • POPART: partial optical implementation of adaptive resonance theory 2

    Publication Year: 1993 , Page(s): 695 - 702
    Cited by:  Papers (6)

    Adaptive resonance architectures are neural nets that are capable of classifying arbitrary input patterns into stable category representations. A hybrid optoelectronic implementation utilizing an optical joint transform correlator is proposed and demonstrated. The resultant optoelectronic system is able to reduce the number of calculations compared to a strictly computer-based approach. The result is that, for larger images, the optoelectronic system is faster than the computer-based approach.

  • The capacity of associative memories with malfunctioning neurons

    Publication Year: 1993 , Page(s): 628 - 635
    Cited by:  Papers (4)

    Hopfield associative memories with αn malfunctioning neurons are considered. Using some facts from the theory of exchangeable events, the asymptotic storage capacity of such a network is derived as a function of the parameter α under stability and attractivity requirements. It is shown that the asymptotic storage capacity is (1-α)²n/(4 log n) under the stability requirement and (1-α)²(1-2ρ)²n/(4 log n) under the attractivity requirement. Comparing these capacities with their maximum values, corresponding to the case of no malfunctioning neurons (α=0), shows the robustness of the retrieval mechanism of Hopfield associative memories with respect to malfunctioning neurons. This result also supports the claim that neural networks are fault tolerant.
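
    For readability, the two capacity expressions restated in display form (ρ, per the attractivity requirement, is read here as the relative radius of attraction):

```latex
C_{\text{stability}}(\alpha) = \frac{(1-\alpha)^{2}\, n}{4 \log n},
\qquad
C_{\text{attractivity}}(\alpha,\rho) = \frac{(1-\alpha)^{2}(1-2\rho)^{2}\, n}{4 \log n}.
```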

  • A modified bidirectional decoding strategy based on the BAM structure

    Publication Year: 1993 , Page(s): 710 - 717
    Cited by:  Papers (11)

    Based on B. Kosko's bidirectional associative memory (BAM) strategy, a modified bidirectional decoding strategy (MBDS) is introduced. The MBDS structure provides sufficient recall capability by adding association fascicles to the BAM structure. These association fascicles are established with a relating parameter that controls the recall performance. Pattern recognition examples are presented to show that the MBDS requires considerably fewer weighting connections than the dummy augmentation method. Moreover, when there is a large number of trained pairs, the MBDS is still applicable, whereas the dummy augmentation method becomes impractical.
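
    MBDS augments the standard Kosko BAM recall loop, which is compact enough to sketch below; the association fascicles and the relating parameter of MBDS itself are not shown.

```python
import numpy as np

def bam_recall(W, x, max_iter=50):
    """Standard bidirectional recall: bounce a bipolar pattern
    between the two layers through W and W.T until it settles.
    W is typically sum_k outer(a_k, b_k) over bipolar pairs."""
    sign = lambda v: np.where(v >= 0, 1, -1)
    y = sign(W.T @ x)                # X layer -> Y layer
    for _ in range(max_iter):
        x_new = sign(W @ y)          # Y layer -> X layer
        y = sign(W.T @ x_new)        # X layer -> Y layer
        if np.array_equal(x_new, x): # fixed point reached
            break
        x = x_new
    return x, y
```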

  • Neural network adaptive image coding

    Publication Year: 1993 , Page(s): 615 - 627
    Cited by:  Papers (2)

    An adaptive image-coding system using neural networks is presented. The design of the system is based on the fact that system adaptability is a key to its effectiveness and efficiency. A composite source data model is suggested as a mathematical model for image data. Based on the composite source model, the coding system first classifies image data and then transforms and codes data classes with dedicated schemes. LEP, a reliable learning neural network model that uses experiences and perspectives, is proposed for image data classification using textures. A scheme for learning the Karhunen-Loeve (K-L) transform basis, arranged in descending order of eigenvalue, in a two-layer linear feedforward network is developed. These two learning mechanisms serve as essential parts of the coding system and considerably enhance its adaptability. The experimental results show compressed images of good quality at bit rates as low as 0.1767 bit per pixel.
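
    The K-L (PCA) basis that the two-layer network learns can be computed directly for comparison; a sketch via eigendecomposition of the block covariance, with names illustrative.

```python
import numpy as np

def kl_basis(blocks):
    """Karhunen-Loeve basis for image blocks (rows of `blocks`),
    returned in descending order of eigenvalue, matching the
    learned arrangement described above."""
    X = blocks - blocks.mean(axis=0)         # zero-mean data
    C = X.T @ X / len(X)                     # sample covariance
    eigvals, eigvecs = np.linalg.eigh(C)     # ascending order
    order = np.argsort(eigvals)[::-1]        # flip to descending
    return eigvecs[:, order], eigvals[order]
```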

  • A simplified neural network solution through problem decomposition: the case of the truck backer-upper

    Publication Year: 1993 , Page(s): 718 - 720
    Cited by:  Papers (28)

    D.H. Nguyen and B. Widrow (1990) demonstrated that a feedforward neural network can be trained to steer a tractor-trailer truck to a dock while backing up. The feedforward network they used to control the truck contained 25 hidden units and required extensive training. The authors demonstrate that a very simple solution to the truck backer-upper exists and can be found by decomposing the problem into subtasks, each solved by a simple control law. By hard-wiring these control laws into a network, they found a controller with only two hidden units that performs as well as the larger controller trained from scratch. This approach could be used to build up more complex controllers from simple components.

  • An analytical comparison of a neural network and a model-based adaptive controller

    Publication Year: 1993 , Page(s): 685 - 694
    Cited by:  Papers (17)

    A neural network inverse dynamics controller with adjustable weights is compared with a computed-torque type adaptive controller. Lyapunov stability techniques, usually applied to adaptive systems, are used to derive a globally asymptotically stable adaptation law for a single-layer neural network controller that bears similarities to the well-known delta rule for neural networks. This alternative learning rule allows the learning rates of each connection weight to be individually adjusted to give faster convergence. The role of persistently exciting inputs in ensuring parameter convergence, often mentioned in the context of adaptive systems, is emphasized in relation to the convergence of neural network weights. A coupled, compound pendulum system is used to develop inverse dynamics controllers based on adaptive and neural network techniques. Adaptation performance is compared for a model-based adaptive controller and a simple neural network utilizing both delta-rule learning and the alternative adaptation law.
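
    The flavor of a delta-rule update with an individual learning rate per connection weight, as described above, sketched for a single linear neuron (all names are illustrative):

```python
import numpy as np

def per_weight_delta_step(w, x, target, rates):
    """Delta-rule-like update where `rates` holds one learning
    gain per connection weight, allowing some weights to adapt
    faster than others."""
    error = target - w @ x
    return w + rates * error * x
```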

  • Mean field annealing using compound Gauss-Markov random fields for edge detection and image estimation

    Publication Year: 1993 , Page(s): 703 - 709
    Cited by:  Papers (16)

    The authors consider the problem of edge detection and image estimation in nonstationary images corrupted by additive Gaussian noise. The noise-free image is represented using the compound Gauss-Markov random field developed by F.C. Jeng and J.W. Woods (1990), and the problem of image estimation and edge detection is posed as a maximum a posteriori estimation problem. Since the a posteriori probability function is nonconvex, computationally intensive stochastic relaxation algorithms are normally required. A deterministic relaxation method based on mean field annealing with a compound Gauss-Markov random field (CGMRF) model is proposed. The authors present a set of iterative equations for the mean values of the intensity and of both the horizontal and vertical line processes, with or without taking into account the interaction between them. The relationship between this technique and two other methods is considered. Edge detection and image estimation results on several noisy images are included.

  • Synaptic weight noise during multilayer perceptron training: fault tolerance and training improvements

    Publication Year: 1993 , Page(s): 722 - 725
    Cited by:  Papers (20)

    The authors develop a mathematical model of the effects of synaptic arithmetic noise in multilayer perceptron training. Predictions are made regarding enhanced fault-tolerance and generalization ability and improved learning trajectory. These predictions are subsequently verified by simulation. The results are perfectly general and have profound implications for the accuracy requirements in multilayer perceptron (MLP) training, particularly in the analog domain.
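
    The noise model is straightforward to emulate; a sketch assuming multiplicative Gaussian weight noise in a small tanh MLP, with the noise level sigma as a placeholder.

```python
import numpy as np

def noisy_forward(weights, x, rng, sigma=0.02):
    """Forward pass of a tiny MLP with multiplicative Gaussian
    noise injected into every synaptic weight on every use,
    emulating synaptic arithmetic noise during training."""
    h = x
    for W in weights:
        W_noisy = W * (1.0 + sigma * rng.standard_normal(W.shape))
        h = np.tanh(W_noisy @ h)
    return h

# usage: rng = np.random.default_rng(0); y = noisy_forward(Ws, x, rng)
```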

  • A clustering technique for digital communications channel equalization using radial basis function networks

    Publication Year: 1993 , Page(s): 570 - 579
    Cited by:  Papers (182)

    The application of a radial basis function network to digital communications channel equalization is examined. It is shown that the radial basis function network has an identical structure to the optimal Bayesian symbol-decision equalizer solution and, therefore, can be employed to implement the Bayesian equalizer. The training of a radial basis function network to realize the Bayesian equalization solution can be achieved efficiently using a simple and robust supervised clustering algorithm. During data transmission, a decision-directed version of the clustering algorithm enables the radial basis function network to track a slowly time-varying environment. Moreover, the clustering scheme provides an automatic compensation for nonlinear channel and equipment distortion. Computer simulations are included to illustrate the analytical results.
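
    One plausible reading of the decision-directed tracking step, sketched below: the current equalizer decides the channel state for a received sample, and that state's centre is nudged toward the sample (function names and rate are assumptions, not the authors' exact algorithm).

```python
import numpy as np

def decision_directed_update(centers, r, decide, lr=0.05):
    """Move the RBF centre of the decided channel state toward the
    newly received sample r, so the centres track a slowly
    time-varying channel; `decide` maps r to a state index."""
    k = decide(r)                          # decided channel state
    centers[k] += lr * (r - centers[k])    # supervised-clustering step
    return centers
```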

  • `Neural-gas' network for vector quantization and its application to time-series prediction

    Publication Year: 1993 , Page(s): 558 - 569
    Cited by:  Papers (280)  |  Patents (6)

    A neural network algorithm based on a soft-max adaptation rule is presented. This algorithm exhibits good performance in minimizing the cost function for vector quantization data compression. The soft-max rule employed is an extension of the standard K-means clustering procedure and takes into account a neighborhood ranking of the reference (weight) vectors. It is shown that the dynamics of the reference (weight) vectors during the input-driven adaptation procedure are determined by the gradient of an energy function, whose shape can be modulated through a neighborhood-determining parameter, and resemble the dynamics of Brownian particles moving in a potential determined by the data-point density. The network is used to represent the attractor of the Mackey-Glass equation and to predict the Mackey-Glass time series, with additional local linear mappings for generating output values. The results obtained for the time-series prediction compare favorably with those achieved by backpropagation and radial basis function networks.
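
    The rank-based soft-max adaptation rule is compact enough to sketch; the step size eps and the neighborhood decay lam are illustrative.

```python
import numpy as np

def neural_gas_step(w, x, eps=0.1, lam=2.0):
    """One neural-gas step: every reference vector adapts with a
    strength that decays exponentially with its distance rank to
    the input (0 = closest), not just the single winner."""
    d = np.linalg.norm(w - x, axis=1)
    ranks = np.argsort(np.argsort(d))    # distance rank of each vector
    h = np.exp(-ranks / lam)             # neighborhood weighting
    w += eps * h[:, None] * (x - w)
    return w
```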


Aims & Scope

IEEE Transactions on Neural Networks is devoted to the science and technology of neural networks, publishing work that discloses significant technical knowledge, exploratory developments, and applications of neural networks, from biology to software to hardware.

 

This Transactions ceased production in 2011. The current retitled publication is IEEE Transactions on Neural Networks and Learning Systems.
