
Neural Networks for Signal Processing II: Proceedings of the 1992 IEEE Workshop

Aug. 31 – Sept. 2, 1992


Displaying Results 1–25 of 64
  • An electronic parallel neural CAM for decoding

    Publication Year: 1992, Page(s):581 - 587
    PDF (288 KB)

    The authors report measurements taken on an electronic neural system configured for content addressable memory (CAM) using a high-capacity architecture. It is shown that Boltzmann and mean-field learning networks can be implemented in a parallel, analog VLSI system. This system was used to perform experiments with mean-field CAM. The hardware settles on a stored codeword in about 10 μs roughly i...

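    The mean-field recall dynamics described above can be sketched in software (a minimal numerical sketch with Hebbian outer-product storage and tanh mean-field updates; the paper's analog VLSI hardware and learning procedure are not reproduced, and all sizes here are illustrative):

```python
import numpy as np

def mean_field_cam(patterns, probe, beta=2.0, iters=50):
    """Mean-field (deterministic Boltzmann) CAM recall.

    patterns: (P, N) array of stored +/-1 codewords (Hebbian storage)
    probe:    (N,) corrupted +/-1 input vector
    """
    P, N = patterns.shape
    W = patterns.T @ patterns / N          # Hebbian outer-product weights
    np.fill_diagonal(W, 0.0)               # no self-connections
    m = probe.astype(float)                # mean activations in [-1, 1]
    for _ in range(iters):
        m = np.tanh(beta * (W @ m))        # mean-field fixed-point iteration
    return np.sign(m)

# Usage: corrupt a stored codeword and let the network settle
rng = np.random.default_rng(0)
patterns = rng.choice([-1.0, 1.0], size=(2, 64))
probe = patterns[0].copy()
probe[:5] *= -1                            # flip 5 of 64 bits
recalled = mean_field_cam(patterns, probe)
```

    For mild corruption and few stored patterns, the iteration settles on the nearest stored codeword.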
  • Globally trained neural network architecture for image compression

    Publication Year: 1992, Page(s):289 - 295
    Cited by:  Papers (1)
    PDF (321 KB)

    The authors discuss the development of a coding system for image transmission based on block-transform coding and vector quantization. Moreover, a classification of the image blocks is performed in the spatial domain. An architecture incorporating both multilayered perceptron and self-organizing feature map neural networks and a block classification is considered to realize the image coding scheme...

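    The block-classification and vector-quantization stages can be illustrated with a minimal sketch (a variance-threshold classifier and a plain nearest-neighbour VQ stand in for the paper's classifier and its MLP/SOM networks; the block size and threshold are illustrative):

```python
import numpy as np

def classify_blocks(image, bsize=4, thresh=10.0):
    """Split the image into bsize x bsize blocks and label each by its
    spatial-domain variance (a crude stand-in for the paper's block
    classifier)."""
    H, W = image.shape
    blocks, labels = [], []
    for r in range(0, H, bsize):
        for c in range(0, W, bsize):
            b = image[r:r + bsize, c:c + bsize]
            blocks.append(b.ravel())
            labels.append('textured' if b.var() > thresh else 'smooth')
    return np.array(blocks), labels

def vq_encode(vectors, codebook):
    """Nearest-neighbour vector quantization: each vector is replaced by
    the index of its closest codeword."""
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)

# Usage: an 8x8 image whose right half is noisy texture
rng = np.random.default_rng(1)
img = np.zeros((8, 8))
img[:, 4:] = rng.normal(0, 8, size=(8, 4))
blocks, labels = classify_blocks(img)
indices = vq_encode(blocks, rng.normal(size=(4, 16)))
```

    In a real codec the codebook would be trained (e.g., by a SOM, as in the paper) rather than drawn at random.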
  • Spectral representations for speech recognition by neural networks - A tutorial

    Publication Year: 1992, Page(s):214 - 222
    PDF (301 KB)

    Spectrum-based speech representations are discussed. Spectral representations, in order to be useful for speech recognition, need to be justified from both the computational (analytical) and the perceptual viewpoints. The authors' discussion of spectral representations, therefore, includes both the computational model and the associated measures of similarity that are appropriate for neural networ...

  • Text-independent talker identification system combining connectionist and conventional models

    Publication Year: 1992, Page(s):131 - 138
    Cited by:  Papers (2)
    PDF (391 KB)

    Several techniques with different characteristics and capabilities have been used for speaker identification. The respective merits of three systems employing neural networks, hidden Markov models, and multivariate autoregressive models are compared. A novel text-independent speaker identification system based on the cooperation of these different techniques is present...

  • Empirical risk optimisation: neural networks and dynamic programming

    Publication Year: 1992, Page(s):121 - 130
    Cited by:  Papers (1)
    PDF (430 KB)

    The authors propose a novel system for speech recognition which makes a multilayer perceptron and a dynamic programming module cooperate. It is trained through a cost function inspired by learning vector quantization which approximates the empirical average risk of misclassification. All the modules of the system are trained simultaneously through gradient backpropagation; this ensures the optimal...

  • Neural Networks for Signal Processing II. Proceedings of the IEEE-SP Workshop (Cat. No.92TH0430-9)

    Publication Year: 1992
    PDF (32 KB)
    Freely Available from IEEE
  • Adaptive segmentation of textured images using linear prediction and neural networks

    Publication Year: 1992, Page(s):401 - 410
    PDF (624 KB)

    An adaptive technique for classifying and segmenting textured images is presented. This technique uses an efficient least squares algorithm for recursive estimation of two-dimensional autoregressive texture models and neural networks for recursive classification of the models. A network with fixed, but space-varying, interconnection weights is used to optimally select a small representative set of...

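    The least-squares estimation of a two-dimensional autoregressive texture model can be sketched as follows (a batch least-squares fit rather than the paper's recursive algorithm; the three-pixel causal neighbourhood and all constants are illustrative):

```python
import numpy as np

def fit_2d_ar(image, neighbors=((0, -1), (-1, 0), (-1, -1))):
    """Batch least-squares fit of a causal 2-D autoregressive model:
    x[r, c] ~ sum_k a_k * x[r + dr_k, c + dc_k] + noise."""
    H, W = image.shape
    rows, targets = [], []
    for r in range(1, H):
        for c in range(1, W):
            rows.append([image[r + dr, c + dc] for dr, dc in neighbors])
            targets.append(image[r, c])
    a, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(targets), rcond=None)
    return a

# Usage: synthesize a texture from known AR coefficients, then recover them
rng = np.random.default_rng(2)
H = W = 40
x = np.zeros((H, W))
for r in range(1, H):
    for c in range(1, W):
        x[r, c] = (0.5 * x[r, c - 1] + 0.3 * x[r - 1, c]
                   + 0.1 * x[r - 1, c - 1] + 0.1 * rng.normal())
a_hat = fit_2d_ar(x)                       # close to [0.5, 0.3, 0.1]
```

    Segmentation then amounts to classifying the locally estimated coefficient vectors, which is where the paper's neural classifier comes in.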
  • A generalization error estimate for nonlinear systems

    Publication Year: 1992, Page(s):29 - 38
    Cited by:  Papers (14)
    PDF (376 KB)

    A new estimate (GEN) of the generalization error is presented. The estimator is valid for both incomplete and nonlinear models. An incomplete model is characterized in that it does not model the actual nonlinear relationship perfectly. The GEN estimator has been evaluated by simulating incomplete models of linear and simple neural network systems. Within the linear system GEN is compared to the fi...

  • Some new results in nonlinear predictive image coding using neural networks

    Publication Year: 1992, Page(s):411 - 420
    Cited by:  Papers (5)
    PDF (288 KB)

    The problem of nonlinear predictive image coding with multilayer perceptrons is considered. Some important aspects of coding, including the training of multilayer perceptrons, the adaptive scheme, and the robustness to the channel noise, are discussed in detail. Computer simulation results show that nonlinear predictors have better predictive performances than the linear DPCM. It is shown that the...

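    The closed-loop predictive-coding structure this comparison rests on can be sketched as follows (a first-order linear predictor stands in for the DPCM baseline; swapping `predict` for a small trained MLP would give a nonlinear predictor, though the paper's actual networks and training are not reproduced):

```python
import numpy as np

def dpcm_codec(signal, predict, quantize):
    """Closed-loop DPCM: predict each sample from the *reconstructed* past
    so encoder and decoder stay in sync, and transmit the quantized
    prediction residual."""
    recon = np.zeros_like(signal, dtype=float)
    residuals = []
    for i in range(len(signal)):
        p = predict(recon[:i])
        q = quantize(signal[i] - p)        # the value actually transmitted
        residuals.append(q)
        recon[i] = p + q                   # decoder-side reconstruction
    return np.array(residuals), recon

# Usage: first-order linear predictor, uniform quantizer with step 0.25
linear_pred = lambda past: past[-1] if len(past) else 0.0
quant = lambda e: np.round(e * 4) / 4
sig = np.sin(np.linspace(0, 4 * np.pi, 50))
res, rec = dpcm_codec(sig, linear_pred, quant)
```

    Because the loop is closed around the quantizer, the reconstruction error per sample is bounded by half the quantizer step regardless of the predictor.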
  • Learning of sinusoidal frequencies by nonlinear constrained Hebbian algorithms

    Publication Year: 1992, Page(s):39 - 48
    Cited by:  Papers (3)
    PDF (432 KB)

    The authors study certain unsupervised nonlinear Hebbian learning algorithms in the context of sinusoidal frequency estimation. If the nonlinearity is chosen suitably, these algorithms often perform better than linear Hebbian PCA subspace estimation algorithms in colored and impulsive noise. One of the algorithms seems to be able to separate the sinusoids from a noisy mixture input signal. The auth...

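    One common nonlinear variant of Oja's Hebbian rule (not necessarily one of the exact algorithms studied in the paper) can be sketched on the sinusoid-in-noise problem:

```python
import numpy as np

# Training input: windows of a sinusoid (random phase) in white noise
rng = np.random.default_rng(3)
N, f = 32, 4                     # window length; frequency in cycles/window
t = np.arange(N)

w = rng.normal(size=N)
w /= np.linalg.norm(w)
for _ in range(4000):
    phase = rng.uniform(0, 2 * np.pi)
    x = np.sin(2 * np.pi * f * t / N + phase) + 0.1 * rng.normal(size=N)
    y = w @ x
    # Nonlinear Oja-type Hebbian update with g(y) = tanh(y); the saturating
    # nonlinearity limits the influence of large (impulsive) samples
    w += 0.01 * np.tanh(y) * (x - y * w)

# The learned weight vector is close to a unit-norm sinusoid at frequency f
spectrum = np.abs(np.fft.rfft(w))
```

    The peak of `spectrum` falls at bin `f`, i.e. the unit has learned the sinusoid's frequency without supervision.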
  • A system identification perspective on neural nets

    Publication Year: 1992, Page(s):423 - 435
    Cited by:  Papers (10)
    PDF (508 KB)

    The authors review some of the basic system identification machinery to reveal connections with neural networks. In particular, they point to the role of regularization in dealing with model structures with many parameters, and show the links to overtraining in neural nets. Some provisional explanations for the success of neural nets are also offered.

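    The regularization link can be made concrete with ridge regression, the basic system-identification form of a parameter penalty (a generic illustration, not an example from the paper):

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Regularized least squares: theta = (X'X + lam*I)^{-1} X'y.
    Penalizing ||theta||^2 limits the effective model flexibility, the
    system-identification counterpart of weight decay used against
    overtraining in neural nets."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Usage: an over-parameterized regression (20 samples, 15 parameters)
rng = np.random.default_rng(5)
X = rng.normal(size=(20, 15))
y = X[:, 0] + 0.1 * rng.normal(size=20)    # only one feature matters
theta_loose = ridge_fit(X, y, 1e-8)        # nearly unregularized
theta_ridge = ridge_fit(X, y, 10.0)        # strongly regularized
```

    The regularized solution has a smaller parameter norm, trading a little bias for much lower variance when parameters outnumber what the data can support.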
  • Interactive query learning for isolated speech recognition

    Publication Year: 1992, Page(s):93 - 102
    Cited by:  Papers (6)  |  Patents (3)
    PDF (96 KB)

    The authors propose an interactive query learning approach to isolated speech recognition tasks. The approach starts with training multiple 'one-net-one-class' time delay neural networks (TDNNs) based on sequences of LPC vectors. After all TDNNs are trained, initiated from each available LPC training sequence for one specific TDNN (say, class k), an improved network inversion algorithm wi...

  • A neural feedforward network with a polynomial nonlinearity

    Publication Year: 1992, Page(s):49 - 58
    Cited by:  Patents (5)
    PDF (308 KB)

    A novel neural network based on the Wiener model is proposed. The network is composed of a hidden layer of preprocessing neurons followed by a polynomial nonlinearity and a linear output neuron. The author tries to solve the problem of finding an appropriate preprocessing method by using a modified backpropagation algorithm. It is shown by the use of calculation trees that the proposed approach is...

  • Maximum mutual information training of a neural predictive-based HMM speech recognition system

    Publication Year: 1992, Page(s):164 - 173
    PDF (328 KB)

    A corrective training scheme based on the maximum mutual information (MMI) criterion is developed for training a neural predictive-based HMM (hidden Markov model) speech recognition system. The performance of the system on speech recognition tasks when trained with this technique is compared to its performance when trained using the maximum likelihood (ML) criterion. Preliminary results obtained i...

  • A fast simulator for neural networks on DSPs or FPGAs

    Publication Year: 1992, Page(s):597 - 605
    Cited by:  Papers (1)
    PDF (380 KB)

    The authors present a description of their achievements and current research on the implementation of a fast digital simulator for artificial neural networks. This simulator is mapped either on a parallel digital signal processor (DSP) or on a set of field programmable gate arrays (FPGAs). Powerful tools have been developed that automatically compile a graphical neural network description into exe...

  • Prediction of chaotic time series using recurrent neural networks

    Publication Year: 1992, Page(s):436 - 443
    Cited by:  Papers (3)  |  Patents (1)
    PDF (356 KB)

    The authors propose to train and use a recurrent artificial neural network (ANN) to predict a chaotic time series. Instead of training the network with the next sample in the time series as is normally done, a sequence of samples that follows the present sample will be utilized. Dynamical parameters extracted from the time series provide the information to set the length of these training sequence...

  • Adaptive training of feedback neural networks for non-linear filtering

    Publication Year: 1992, Page(s):550 - 559
    Cited by:  Papers (6)
    PDF (348 KB)

    The authors propose a general framework which encompasses the training of neural networks and the adaptation of filters. It is shown that neural networks can be considered as general nonlinear filters which can be trained adaptively, i.e., which can undergo continual training. A unified view of gradient-based training algorithms for feedback networks is proposed, which gives rise to new algorithms...

  • Adaptive template method for speech recognition

    Publication Year: 1992, Page(s):103 - 110
    PDF (228 KB)

    An adaptive template method for pattern recognition is proposed. The template adaptation algorithm is derived based on minimizing the classification error of the classifier. The authors have applied this method to a multispeaker English E-set recognition experiment and achieved a 90.38% average recognition rate with only one template for each letter. This indicates that the derived templates are a...

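    An LVQ-style template update, one common way of adapting templates toward lower classification error (the paper derives its own update from a classification-error criterion, not reproduced here), can be sketched as:

```python
import numpy as np

def adapt_template(templates, labels, x, y, lr=0.1):
    """Find the template nearest to sample x; move it toward x when its
    class label matches y (correct classification), away otherwise.
    This is the LVQ1-style rule, a stand-in for the paper's own
    error-minimizing derivation."""
    d = ((templates - x) ** 2).sum(axis=1)
    k = int(d.argmin())
    sign = 1.0 if labels[k] == y else -1.0
    templates[k] += sign * lr * (x - templates[k])
    return k

# Usage: two templates for classes 'a' and 'b'; a sample of class 'a'
templates = np.array([[0.0, 0.0], [1.0, 1.0]])
labels = ['a', 'b']
k = adapt_template(templates, labels, np.array([0.2, 0.2]), 'a')
```

    Repeating this over the training set pulls each template toward the samples it should win and pushes it away from those it should not.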
  • Generalization in cascade-correlation networks

    Publication Year: 1992, Page(s):59 - 68
    Cited by:  Papers (11)
    PDF (348 KB)

    Two network construction algorithms are analyzed and compared theoretically as well as empirically. The first algorithm is the cascade correlation learning architecture proposed by S. E. Fahlman (1990), while the other algorithm is a small but striking modification of the former. Fahlman's algorithm builds multilayer feedforward networks with as many layers as the number of added hidden units, whi...

  • Pattern classification with a codebook-excited neural network

    Publication Year: 1992, Page(s):223 - 232
    PDF (396 KB)

    A codebook-excited neural network (CENN) is formed by a multi-layer perceptron excited by a set of code vectors. The authors study its discriminant performance and compare it with other models. The performance improvement with the CENN is demonstrated in a number of cases. The CENN has been developed for classification. The multilayer codebook-excited feedforward neural network enhances the separa...

  • Generalized feedforward filters with complex poles

    Publication Year: 1992, Page(s):503 - 510
    Cited by:  Papers (5)
    PDF (292 KB)

    The authors propose an extension to an existing structure, the gamma filter, replacing the real pole on the tap-to-tap transfer function with a pair of complex conjugate poles and a zero. The new structure is, like the gamma filter, an IIR filter with restricted feedback whose stability is trivial to check. While the gamma filter decouples the memory depth from the filter order for low-pass signal...

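    The baseline gamma memory that the paper extends can be sketched as follows (the real-pole version only; the proposed complex-conjugate pole pair plus zero is not shown):

```python
import numpy as np

def gamma_filter(x, weights, mu):
    """Gamma filter: an FIR-like structure whose unit delays are replaced
    by leaky first-order sections, so memory depth (~ K/mu) is decoupled
    from the filter order K.  Tap recursion:
        x_0(n) = x(n)
        x_k(n) = (1 - mu) * x_k(n-1) + mu * x_{k-1}(n-1),  k >= 1
    With mu = 1 it reduces to an ordinary tapped delay line."""
    weights = np.asarray(weights, dtype=float)
    K = len(weights)
    state = np.zeros(K)                 # x_k(n-1) for k = 0..K-1
    y = np.empty(len(x))
    for n, xn in enumerate(x):
        new = np.empty(K)
        new[0] = xn
        for k in range(1, K):
            new[k] = (1 - mu) * state[k] + mu * state[k - 1]
        state = new
        y[n] = weights @ state
    return y

# Usage: with mu = 1 and weight on tap 1, the filter is a one-sample delay
out = gamma_filter([1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0], 1.0)
```

    Stability is trivial to check because the only feedback is the single real pole 1 - mu inside each section, which lies in the unit circle for 0 < mu < 2.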
  • Training continuous density hidden Markov models in association with self-organizing maps and LVQ

    Publication Year: 1992, Page(s):174 - 183
    Cited by:  Papers (2)
    PDF (436 KB)

    The authors propose a novel initialization method for continuous observation density hidden Markov models (CDHMMs) that is based on self-organizing maps (SOMs) and learning vector quantization (LVQ). The framework is to transcribe speech into phoneme sequences using CDHMMs as phoneme models. When numerous mixtures of, for example, Gaussian density functions are used to model the observation distri...

  • Real time CCD-based neural network system for pattern recognition applications

    Publication Year: 1992, Page(s):606 - 616
    PDF (588 KB)

    A generic NNC (neural network classifier) capable of providing 1.9 billion programmable connections per second is described. Applications for these generic processors include image and speech recognition as well as sonar signal identification. To demonstrate the modularity and flexibility of the CCD (charge coupled device) NNCs, two generic multilayer system-level boards capable of both feedforwar...

  • Nonlinear system identification using multilayer perceptrons with locally recurrent synaptic structure

    Publication Year: 1992, Page(s):444 - 453
    Cited by:  Papers (3)
    PDF (388 KB)

    It is proved that a multilayer perceptron (MLP) with infinite impulse response (IIR) synapses can represent a class of nonlinear block-oriented systems. This includes the well-known Wiener, Hammerstein, and cascade or sandwich systems. Previous methods used to model these systems such as the Volterra series representation are known to be extremely inefficient, and so the IIR MLP represents an effe...

  • Capacity control in classifiers for pattern recognition

    Publication Year: 1992, Page(s):255 - 266
    Cited by:  Papers (4)
    PDF (460 KB)

    Achieving good performance in statistical pattern recognition requires matching the capacity of the classifier to the size of the available training set. A classifier with too many adjustable parameters (large capacity) is likely to learn the training set without difficulty, but be unable to generalize properly to new patterns. If the capacity is too small, even the training set might not be learn...

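    The capacity/training-set trade-off can be illustrated with polynomial regression, using the degree as a crude capacity knob (a generic illustration, not the paper's experiments):

```python
import numpy as np

# 10 noisy training samples of a smooth target function
rng = np.random.default_rng(4)
x_train = np.linspace(-1, 1, 10)
y_train = np.sin(np.pi * x_train) + 0.2 * rng.normal(size=10)
x_test = np.linspace(-1, 1, 200)
y_test = np.sin(np.pi * x_test)

def fit_errors(degree):
    """Train/test mean squared error of a least-squares polynomial fit;
    the degree plays the role of classifier capacity."""
    coef = np.polyfit(x_train, y_train, degree)
    tr = np.mean((np.polyval(coef, x_train) - y_train) ** 2)
    te = np.mean((np.polyval(coef, x_test) - y_test) ** 2)
    return tr, te

tr3, te3 = fit_errors(3)   # modest capacity
tr7, te7 = fit_errors(7)   # large capacity: lower training error,
                           # typically worse generalization on 10 samples
```

    The higher-degree model always achieves a training error at least as low, while its test error typically reflects the noise it has memorized, which is exactly the mismatch the abstract describes.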