By Topic

Neural Networks, IEEE Transactions on

Issue 5 • Date Sep 2000


Displaying Results 1 - 15 of 15
  • Robust backstepping control of induction motors using neural networks

    Page(s): 1178 - 1187
    PDF (180 KB)

    We present a new robust control technique for induction motors using neural networks (NNs). The method is systematic and robust to parameter variations. Motivated by the backstepping design technique, we first treat certain signals in the system as fictitious control inputs to a simpler subsystem. A two-layer NN is used in this stage to design the fictitious controller. We then apply a second two-layer NN to robustly realize the fictitious NN signals designed in the previous step. A new tuning scheme is proposed which can guarantee the boundedness of the tracking error and weight updates. A main advantage of our method is that it does not require regression matrices, so no preliminary dynamical analysis is needed. Another salient feature of our NN approach is that no off-line learning phase is needed. Full state feedback is needed for implementation. Load torque and rotor resistance can be unknown but bounded.

  • The evidence framework applied to support vector machines

    Page(s): 1162 - 1173
    PDF (232 KB)

    We show that training of the support vector machine (SVM) can be interpreted as performing the level 1 inference of MacKay's evidence framework (1992). We further show that levels 2 and 3 of the evidence framework can also be applied to SVMs. This integration allows automatic adjustment of the regularization parameter and the kernel parameter to their near-optimal values. Moreover, it opens up a wealth of Bayesian tools for use with SVMs. Performance of this method is evaluated on both synthetic and real-world data sets.

  • Improvements to the SMO algorithm for SVM regression

    Page(s): 1188 - 1193
    PDF (124 KB)

    This paper points out an important source of inefficiency in Smola and Scholkopf's (1998) sequential minimal optimization (SMO) algorithm for support vector machine regression that is caused by the use of a single threshold value. Using clues from the Karush-Kuhn-Tucker conditions for the dual problem, two threshold parameters are employed to derive modifications of SMO for regression. These modified algorithms perform significantly faster than the original SMO on the datasets tried.

  • Equivalence between local exponential stability of the unique equilibrium point and global stability for Hopfield-type neural networks with two neurons

    Page(s): 1194 - 1196
    PDF (120 KB)

    Fang and Kincaid (1996) proposed an open problem about the relationship between the local stability of the unique equilibrium point and the global stability of a Hopfield-type neural network with continuously differentiable and monotonically increasing activation functions. As a partial answer to the problem, in the two-neuron case it is proved that for each given specific interconnection weight matrix, a Hopfield-type neural network has a unique equilibrium point which is also locally exponentially stable for any activation functions and for any other network parameters if and only if the network is globally asymptotically stable for any activation functions and for any other network parameters. If the derivatives of the activation functions of the network are bounded, then the network is globally exponentially stable for any activation functions and for any other network parameters.

  • An efficient learning algorithm for associative memories

    Page(s): 1058 - 1066
    PDF (240 KB)

    Associative memories (AMs) can be implemented using networks with or without feedback. We utilize a two-layer feedforward neural network and propose a learning algorithm that efficiently implements the association rule of a bipolar AM. The hidden layer of the network employs p neurons, where p is the number of prototype patterns. In the first layer, the input pattern activates at most one hidden layer neuron, or “winner”. In the second layer, the “winner” associates the input pattern with the corresponding prototype pattern. The underlying association principle is minimum Hamming distance, and the proposed scheme can also be viewed as an approximately minimum Hamming distance decoder. Theoretical analysis supported by simulations indicates that, in comparison with other suboptimum minimum Hamming distance association schemes, the proposed structure exhibits the following favorable characteristics: 1) it operates in one shot, which implies no convergence-time requirements; 2) it does not require any feedback; and 3) our case studies show that it exhibits superior performance to the popular linear system in a saturated mode. The network also exhibits 4) exponential capacity and 5) easy performance assessment (no asymptotic analysis is necessary). Finally, since it does not require any hidden layer interconnections or tree-search operations, it exhibits low structural as well as operational complexity.
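    The association principle described in this abstract — map an input to the stored prototype at minimum Hamming distance, in a single forward pass — can be sketched as follows. The helper names and the tiny bipolar patterns are illustrative assumptions, not the authors' network construction.

    ```python
    # Minimal sketch of one-shot minimum-Hamming-distance association
    # for bipolar (+/-1) patterns. The "winner" selection over p
    # prototypes mirrors the two-layer feedforward structure described
    # above, without feedback or iteration.

    def hamming_distance(a, b):
        """Number of positions where two bipolar vectors differ."""
        return sum(1 for x, y in zip(a, b) if x != y)

    def associate(pattern, prototypes):
        """One-shot association: return the prototype closest in
        Hamming distance to the (possibly corrupted) input pattern."""
        return min(prototypes, key=lambda p: hamming_distance(pattern, p))

    prototypes = [
        (1, 1, 1, -1, -1, -1),
        (-1, -1, 1, 1, -1, 1),
    ]

    # A noisy version of the first prototype (one flipped bit) is
    # mapped back to it in a single pass.
    noisy = (1, -1, 1, -1, -1, -1)
    print(associate(noisy, prototypes))
    ```

    The `min` over prototypes stands in for the hidden "winner" layer; a hardware or NN realization would compute the same decision with p hidden units.
    
    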

  • The annealing robust backpropagation (ARBP) learning algorithm

    Page(s): 1067 - 1077
    PDF (296 KB)

    Multilayer feedforward neural networks are often referred to as universal approximators. Nevertheless, if the training data are corrupted by large noise, such as outliers, traditional backpropagation learning schemes may not always achieve acceptable performance. Even though various robust learning algorithms have been proposed in the literature, those approaches still suffer from the initialization problem. In those robust learning algorithms, the so-called M-estimator is employed, whose loss function discriminates outliers from the majority by degrading their effects in learning. However, the loss function used in those algorithms may not correctly discriminate against outliers. In this paper, the annealing robust backpropagation learning algorithm (ARBP), which adopts the annealing concept into robust learning, is proposed to deal with modeling in the presence of outliers. The proposed algorithm has been applied to various examples, and the results all demonstrate its superiority over other robust learning algorithms regardless of the outliers. In addition to adopting the annealing concept into robust learning, the annealing schedule k/t, where k is a constant and t is the epoch number, was found experimentally to achieve the best performance among the annealing schedules tried.
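    The k/t annealing schedule mentioned in the abstract can be illustrated with a toy M-estimator. The Cauchy-type loss below is a stand-in choice for illustration, not necessarily the paper's exact estimator; the point is that the scale starts large (near least-squares behavior) and shrinks with the epoch number, progressively damping outliers.

    ```python
    # Illustrative sketch of annealing in robust learning: an
    # M-estimator loss whose scale parameter follows the k/t schedule,
    # so a fixed large residual (an outlier) contributes less and less
    # to the loss as training proceeds.
    import math

    def annealed_scale(k, t):
        """Annealing schedule: scale = k / t for epoch t >= 1."""
        return k / t

    def robust_loss(residual, scale):
        """Cauchy-type M-estimator loss; shrinking the scale damps
        large residuals more aggressively."""
        return (scale / 2.0) * math.log(1.0 + residual ** 2 / scale)

    # The loss attributed to a fixed outlier residual decreases as the
    # epoch number grows and the scale anneals toward zero.
    outlier_residual = 10.0
    for epoch in (1, 10, 100):
        scale = annealed_scale(100.0, epoch)
        print(epoch, robust_loss(outlier_residual, scale))
    ```
    
    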

  • A fuzzy clustering neural networks (FCNs) system design methodology

    Page(s): 1174 - 1177
    PDF (96 KB)

    A system design methodology for fuzzy clustering neural networks (FCNs) is presented. This methodology emphasizes coordination between FCN model definition, architectural description, and systolic implementation. Two mapping strategies, from the FCN model to the system architecture and from the given architecture to systolic arrays, are described. The effectiveness of the methodology is illustrated by: 1) applying the design to an effective FCN model; 2) developing the corresponding parallel architecture with special feedforward and feedback paths; and 3) building the systolic array suitable for VLSI implementation.

  • A neural network for linear matrix inequality problems

    Page(s): 1078 - 1092
    PDF (472 KB)

    Gradient-type Hopfield networks have been widely used in solving optimization problems. This paper presents a novel application by developing a matrix-oriented gradient approach to solve a class of linear matrix inequalities (LMIs), which are commonly encountered in robust control system analysis and design. The solution process is parallel and distributed in neural computation. The proposed networks are proven to be stable in the large. Representative LMIs such as generalized Lyapunov matrix inequalities, simultaneous Lyapunov matrix inequalities, and algebraic Riccati matrix inequalities are considered. Several examples are provided to demonstrate the proposed results. To verify the proposed control scheme in real-time applications, a high-speed digital signal processor is used to emulate the neural-net-based control scheme.

  • A connectionist model for corner detection in binary and gray images

    Page(s): 1124 - 1132
    PDF (420 KB)

    For a given binary/gray image, each pixel is assigned an initial cornerity (our measurable quantity), a vector representing the direction and strength of the corner. These cornerities are then mapped onto a neural-network model designed as a cooperative computational framework. The cornerity at each pixel is updated based on neighborhood information. After the network dynamics settles to a stable state, the dominant points are obtained by finding the local maxima of the cornerities. Theoretical investigations are made to ensure the stability and convergence of the network. The network is found to be able to detect corner points even in noisy images and for open object boundaries. The dynamics of the network is extended to accept edge information from gray images as well. The effectiveness of the model is experimentally demonstrated on synthetic and real-life binary and gray images.

  • Dynamical behavior of autoassociative memory performing novelty filtering for signal enhancement

    Page(s): 1152 - 1161
    PDF (272 KB)

    This paper deals with the dynamical behavior, in a probabilistic sense, of a simple perceptron network with sigmoidal output units performing autoassociation for novelty filtering. Networks of retinotopic topology having a one-to-one correspondence between input and output units can be readily trained using the delta learning rule to perform autoassociative mappings. A novelty filter is obtained by subtracting the network output from the input vector. Then the presentation of a “familiar” pattern tends to evoke a null response, but any anomalous component is enhanced. Such behavior is a promising feature for enhancement of weak signals in additive noise. This paper shows that the probability density function of the weight converges to a Gaussian when the input time series is statistically characterized by nonsymmetrical probability density functions. It is shown that the probability density function of the weight satisfies the Fokker-Planck equation. By solving the Fokker-Planck equation, it is found that the weight is Gaussian distributed with time-dependent mean and variance.
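    The novelty-filtering step described in the abstract — subtract the autoassociative output from the input, so familiar structure cancels and anomalies remain — can be sketched as follows. The projection-based "autoassociator" is a toy stand-in for the trained perceptron network, introduced here only for illustration.

    ```python
    # Sketch of novelty filtering: residual = input - autoassociation.
    # A familiar pattern yields a near-null response; an anomalous
    # component survives the subtraction.

    def novelty_filter(x, autoassociate):
        """Return the input minus its autoassociative reconstruction."""
        y = autoassociate(x)
        return [xi - yi for xi, yi in zip(x, y)]

    # Toy autoassociator: assumed trained to reproduce the familiar
    # component, modeled here as projection onto one stored pattern.
    familiar = [1.0, 2.0, 3.0, 4.0]

    def project_onto_familiar(x):
        norm_sq = sum(f * f for f in familiar)
        coeff = sum(xi * fi for xi, fi in zip(x, familiar)) / norm_sq
        return [coeff * fi for fi in familiar]

    # The familiar input cancels; a perturbed input leaves a residual.
    print(novelty_filter(familiar, project_onto_familiar))
    print(novelty_filter([1.0, 2.0, 3.0, 10.0], project_onto_familiar))
    ```
    
    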

  • Weight adaptation and oscillatory correlation for image segmentation

    Page(s): 1106 - 1123
    PDF (2568 KB)

    We propose a method for image segmentation based on a neural oscillator network. Unlike previous methods, weight adaptation is adopted during segmentation to remove noise and preserve significant discontinuities in an image. Moreover, a logarithmic grouping rule is proposed to facilitate grouping of oscillators representing pixels with coherent properties. We show that weight adaptation plays the roles of noise removal and feature preservation. In particular, our weight adaptation scheme is insensitive to termination time: the resulting dynamic weights over a wide range of iterations lead to the same segmentation results. A computer algorithm derived from oscillatory dynamics is applied to synthetic and real images, and simulation results show that the algorithm yields favorable segmentation results in comparison with other recent algorithms. In addition, the weight adaptation scheme can be directly transformed into a novel feature-preserving smoothing procedure. We also demonstrate that our nonlinear smoothing algorithm achieves good results for various kinds of images.

  • Signal detection using the radial basis function coupled map lattice

    Page(s): 1133 - 1151
    PDF (996 KB)

    Observation shows that sea clutter, the radar echo from a sea surface, is chaotic rather than random. We propose the use of a spatial-temporal predictor to reconstruct the chaotic dynamics of sea clutter, because electromagnetic wave scattering is a spatial-temporal phenomenon physically modeled by partial differential equations. The spatial-temporal predictor used here, called the radial basis function coupled map lattice (RBF-CML), uses a linear combination to fuse either measurements from different spatial domains for an RBF prediction or predictions from several RBF nets operating on different spatial regions. Using real-life radar data, it is shown that the RBF-CML is an effective method to reconstruct the sea clutter dynamics. The RBF-CML predictor is then applied to detect small targets in sea clutter using the constant false alarm rate (CFAR) principle. The spatial-temporal approach is shown, both theoretically and experimentally, to be superior to a conventional CFAR detector.

  • On overfitting, generalization, and randomly expanded training sets

    Page(s): 1050 - 1057
    PDF (220 KB)

    An algorithmic procedure is developed for the random expansion of a given training set to combat overfitting and improve the generalization ability of backpropagation-trained multilayer perceptrons (MLPs). The training set is K-means clustered, and locally most entropic colored Gaussian joint input-output probability density function estimates are formed per cluster. The number of clusters is chosen such that the resulting overall colored Gaussian mixture exhibits minimum differential entropy upon global cross-validated shaping. Numerical studies on real and synthetic data examples drawn from the literature illustrate and support these theoretical developments.
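    The core expansion step — fit a Gaussian to each cluster of joint input-output samples and draw synthetic examples from it — can be sketched as follows. The K-means clustering step and the entropy-based choice of cluster count from the paper are omitted (clusters are assumed given), and the diagonal Gaussian is an illustrative simplification of the colored Gaussian estimates.

    ```python
    # Rough sketch of random training-set expansion: per cluster, fit
    # a per-dimension Gaussian to the joint (input, output) samples
    # and sample new synthetic training examples from it.
    import random
    import statistics

    def fit_diagonal_gaussian(samples):
        """Per-dimension mean and standard deviation of a cluster of
        equal-length vectors."""
        dims = list(zip(*samples))
        means = [statistics.fmean(d) for d in dims]
        stds = [statistics.pstdev(d) for d in dims]
        return means, stds

    def expand_cluster(samples, n_new, rng):
        """Draw n_new synthetic joint input-output vectors from the
        cluster's fitted Gaussian."""
        means, stds = fit_diagonal_gaussian(samples)
        return [
            [rng.gauss(m, s) for m, s in zip(means, stds)]
            for _ in range(n_new)
        ]

    rng = random.Random(0)
    cluster = [[0.9, 2.1], [1.1, 1.9], [1.0, 2.0]]  # joint (x, y) samples
    synthetic = expand_cluster(cluster, 5, rng)
    print(len(synthetic), len(synthetic[0]))  # 5 new 2-D samples
    ```

    The synthetic samples scatter around the cluster's mean with its empirical spread, which is the sense in which the expansion stays faithful to the local density.
    
    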

  • Soft learning vector quantization and clustering algorithms based on ordered weighted aggregation operators

    Page(s): 1093 - 1105
    PDF (912 KB)

    This paper develops ordered weighted learning vector quantization (LVQ) and clustering algorithms and investigates their properties. These algorithms are developed by using gradient descent to minimize reformulation functions based on aggregation operators. An axiomatic approach provides conditions for selecting aggregation operators that lead to admissible reformulation functions. Minimization of admissible reformulation functions based on ordered weighted aggregation operators produces a family of soft LVQ and clustering algorithms, which includes fuzzy LVQ and clustering algorithms as special cases. The proposed LVQ and clustering algorithms are used to perform segmentation of magnetic resonance (MR) images of the brain. The diagnostic value of the segmented MR images provides the basis for evaluating a variety of ordered weighted LVQ and clustering algorithms.

  • Neural-network methods for boundary value problems with irregular boundaries

    Page(s): 1041 - 1049
    PDF (332 KB)

    Partial differential equations (PDEs) with boundary conditions (Dirichlet or Neumann) defined on boundaries with simple geometry have been successfully treated using sigmoidal multilayer perceptrons in previous works. This article deals with the case of complex boundary geometry, where the boundary is determined by a number of points that belong to it and are closely located, so as to offer a reasonable representation. Two networks are employed: a multilayer perceptron and a radial basis function network. The latter is used to account for the exact satisfaction of the boundary conditions. The method has been successfully tested on two-dimensional and three-dimensional PDEs and has yielded accurate results.


Aims & Scope

IEEE Transactions on Neural Networks is devoted to the science and technology of neural networks, publishing articles that disclose significant technical knowledge, exploratory developments, and applications of neural networks, from biology to software to hardware.

 

This Transactions ceased production in 2011. The current retitled publication is IEEE Transactions on Neural Networks and Learning Systems.
