
# Neural Networks for Signal Processing Proceedings of the 1991 IEEE Workshop


Displaying Results 1 - 25 of 64
• ### Tutorial: digital neurocomputing for signal/image processing

Publication Year: 1991, Page(s):616 - 644
Cited by:  Papers (4)
PDF (1110 KB)

The requirements on both the computations and storage for neural networks are extremely demanding. Neural information processing would be practical only when efficient and high-speed computing hardware can be made available. The author reviews several approaches to architecture and implementation of neural networks for signal and image processing. The author discusses direct design of dedicated ne...

• ### Segment-based speaker adaptation by neural network

Publication Year: 1991, Page(s):442 - 451
Cited by:  Papers (3)
PDF (370 KB)

The authors propose a segment-to-segment speaker adaptation technique using a feed-forward neural network with a time shifted sub-connection architecture. Differences in voice individuality exist in both the spectral and temporal domains. It is generally known that frame based speaker adaptation techniques cannot compensate for speaker individuality in the temporal domain. Segment based speaker a...

• ### Neural Networks for Signal Processing. Proceedings of the 1991 IEEE Workshop (Cat. No.91TH0385-5)

Publication Year: 1991
PDF (27 KB)

• ### Nonlinear adaptive filtering of systems with hysteresis by quantized mean field annealing

Publication Year: 1991, Page(s):151 - 160
PDF (336 KB)

A technique for nonlinear adaptive filtering of systems with hysteresis has been developed which combines quantized mean field annealing (QMFA) and conventional RLS/FTF adaptive filtering. Hysteresis is modeled as a nonlinear system with memory. Unlike other methods which rely on Volterra and Wiener models, this technique can efficiently handle large order nonlinearities with or without hysteresis...

• ### Note on generalization, regularization and architecture selection in nonlinear learning systems

Publication Year: 1991, Page(s):1 - 10
Cited by:  Papers (66)
PDF (408 KB)

The author proposes a new estimate of generalization performance for nonlinear learning systems called the generalized prediction error (GPE), which is based upon the notion of the effective number of parameters p_eff(λ). GPE does not require the use of a test set or computationally intensive cross validation and generalizes previously proposed model sel...
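The truncation cuts the abstract off before the estimator itself. As a sketch drawn from the later published form of this work (our paraphrase, not a quotation from the abstract; here n is the number of training examples and σ̂² an estimate of the noise variance), the estimate reads approximately:

```latex
\mathrm{GPE} \;=\; \frac{1}{n}\sum_{i=1}^{n}\bigl(y_i - \hat f_{\lambda}(x_i)\bigr)^{2}
\;+\; \frac{2\,\hat\sigma^{2}\, p_{\mathrm{eff}}(\lambda)}{n}
```

The first term is the ordinary training error; the second replaces the raw parameter count of classical criteria with the effective number of parameters p_eff(λ), which shrinks as the regularization λ grows.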

• ### Nonlinear resampling transformation for automatic speech recognition

Publication Year: 1991, Page(s):319 - 326
Cited by:  Papers (3)
PDF (260 KB)

A new technique for speech signal processing called nonlinear resampling transformation (NRT) is proposed. The representation of a speech pattern derived from this technique has two important features: first, it reduces redundancy; second, it effectively removes the nonlinear variations of speech signals in time. The authors have applied NRT to the TI isolated-word database achieving a 99.66% reco...

• ### A space-perturbance/time-delay neural network for speech recognition

Publication Year: 1991, Page(s):385 - 394
Cited by:  Papers (1)
PDF (396 KB)

The authors present a space-perturbance time-delay neural network (SPTDNN), which is a generalization of the time-delay neural network (TDNN) approach. It is shown that by introducing the space-perturbance arrangement, the SPTDNN has the ability to be robust to both temporal and dynamic acoustic variance of speech features, and is thus a potentially competent approach to speaker-independent and/or no...

• ### An outer product neural network for extracting principal components from a time series

Publication Year: 1991, Page(s):161 - 170
Cited by:  Papers (4)  |  Patents (1)
PDF (376 KB)

An outer product neural network architecture has been developed based on subspace concepts. The network is trained by auto-encoding the input exemplars, and will represent the input signal by k-principal components, k being the number of neurons or processing elements in the network. The network is essentially a single linear layer. The weight matrix columns orthonormalize during training. The out...
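The single-linear-layer, auto-encoding scheme the abstract describes closely resembles Oja's subspace rule. A minimal sketch of that rule (our variable names and hyperparameters, not the paper's):

```python
import numpy as np

def subspace_rule(X, k, lr=0.01, epochs=200, seed=0):
    """Train a single linear layer W (d x k) by auto-encoding each input:
    y = W^T x, then W += lr * (x - W y) y^T (Oja's subspace rule).
    The columns of W converge toward an orthonormal basis of the
    k-dimensional principal subspace of the data."""
    rng = np.random.default_rng(seed)
    W = 0.1 * rng.standard_normal((X.shape[1], k))
    for _ in range(epochs):
        for x in X:
            y = W.T @ x                        # project onto current basis
            W += lr * np.outer(x - W @ y, y)   # auto-encoding update
    return W

# Toy data: most variance lies in the first two coordinates.
rng = np.random.default_rng(1)
X = rng.standard_normal((500, 3)) * np.array([3.0, 1.0, 0.1])
W = subspace_rule(X, k=2)
print(np.round(W.T @ W, 2))   # approaches the 2x2 identity matrix
```

Note the "orthonormalize during training" behavior from the abstract: no explicit Gram-Schmidt step appears; orthonormality of the columns emerges as a fixed point of the update.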

• ### A parallel learning filter system that learns the KL-expansion from examples

Publication Year: 1991, Page(s):121 - 130
PDF (384 KB)

A new method for learning in a single-layer linear neural network is investigated. It is based on an optimality criterion that maximizes the information in the outputs and simultaneously concentrates the outputs. The system consists of a number of so-called basic units and it is shown that the stable states of these basic units correspond to the (pure) eigenvectors of the input correlation matrix....

• ### Design of a digital VLSI neuroprocessor for signal and image processing

Publication Year: 1991, Page(s):606 - 615
PDF (336 KB)

An efficient processing element for data/image processing has been designed. Detailed communication networks, instruction sets and circuit blocks are created for ring-connected and mesh-connected systolic arrays for the retrieving and learning phases of the neural network operations. 800 processing elements can be implemented in a 3.75 cm × 3.75 cm chip by using the 0.5 μm CMOS technology fro...

• ### A critical overview of neural network pattern classifiers

Publication Year: 1991, Page(s):266 - 275
Cited by:  Papers (32)  |  Patents (2)
PDF (444 KB)

A taxonomy of neural network pattern classifiers is presented which includes four major groupings. Global discriminant classifiers use sigmoid or polynomial computing elements that have 'high' nonzero outputs over most of their input space. Local discriminant classifiers use Gaussian or other localized computing elements that have 'high' nonzero outputs over only a small localized region of their ...

• ### Discriminative multi-layer feed-forward networks

Publication Year: 1991, Page(s):11 - 20
Cited by:  Papers (20)
PDF (408 KB)

The authors propose a new family of multi-layer, feed-forward network (FFN) architectures. This framework allows examination of several feed-forward networks, including the well-known multi-layer perceptron (MLP) network, the likelihood network (LNET) and the distance network (DNET), in a unified manner. They then introduce a novel formulation which embeds network parameters into a functional form...

• ### Speech recognition by combining pairwise discriminant time-delay neural networks and predictive LR-parser

Publication Year: 1991, Page(s):327 - 336
Cited by:  Papers (1)
PDF (416 KB)

A phoneme recognition method using pairwise discriminant time-delay neural networks (PD-TDNNs) and a continuous speech recognition method using the PD-TDNNs are proposed. It is shown that classification-type neural networks have poor robustness against the difference in speaking rates between training data and testing data. To improve the robustness, the authors developed a phoneme recognition met...

• ### Nonlinear prediction of speech signals using memory neuron networks

Publication Year: 1991, Page(s):395 - 404
Cited by:  Papers (7)  |  Patents (3)
PDF (364 KB)

The authors present a feed-forward neural network architecture that can be used for nonlinear autoregressive prediction of multivariate time-series. It uses specialized neurons (called memory neurons) to store past activations of the network in an efficient fashion. The network learns to be a nonlinear predictor of the appropriate order to model temporal waveforms of speech signals. Arrays of such...
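One common form of the memory-neuron idea is an exponential trace of past activations, so history is kept without explicit tap delays. A minimal sketch (the update form and all names here are our illustration from the memory-neuron literature, not the paper's exact network, and a least-squares readout stands in for a trained output layer):

```python
import numpy as np

def memory_traces(x, alphas):
    """A bank of 'memory neurons': each keeps an exponential trace of
    past inputs, m(t) = alpha * m(t-1) + (1 - alpha) * x(t-1), so a
    predictor can see signal history without explicit tap delays."""
    m = np.zeros((len(x), len(alphas)))
    for t in range(1, len(x)):
        m[t] = alphas * m[t - 1] + (1 - alphas) * x[t - 1]
    return m

# One-step-ahead prediction of a sine from its memory traces; a linear
# least-squares fit stands in for the network's trained output layer.
t = np.arange(400)
x = np.sin(2 * np.pi * t / 25)
M = memory_traces(x, alphas=np.array([0.1, 0.5, 0.9]))
F = np.column_stack([x, M])            # current sample + memory traces
w, *_ = np.linalg.lstsq(F[:-1], x[1:], rcond=None)
pred = F[:-1] @ w
print(np.round(np.abs(pred[100:] - x[1:][100:]).max(), 4))
```

After the start-up transient of the traces dies out, the readout predicts the sinusoid almost exactly, which illustrates why a small bank of traces can replace a long delay line.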

• ### Three-dimensional structured networks for matrix equation solving

Publication Year: 1991, Page(s):80 - 89
PDF (380 KB)

Structured networks are feedforward neural networks with linear neurons that use special training algorithms. Two three-dimensional (3-D) structured networks are developed for solving linear equations and the Lyapunov equation. The basic idea of the structured network approaches is: first, represent a given equation-solving problem by a 3-D structured network so that if the network matches a desir...
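The core idea, representing an equation-solving problem as a network and training until the output error vanishes, can be sketched in its simplest (2-D, linear-system) form; this is our illustration of the principle, not the authors' 3-D architecture:

```python
import numpy as np

def solve_linear_by_training(A, b, lr=0.05, steps=5000):
    """Treat the unknown x as the trainable weights of a linear network
    whose output is A @ x: gradient descent on the squared output error
    ||A x - b||^2 drives the output toward b, so the trained weights
    solve the equation."""
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        residual = A @ x - b           # network output minus target
        x -= lr * (A.T @ residual)     # gradient step on the squared error
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = solve_linear_by_training(A, b)
print(np.round(x, 4))   # matches np.linalg.solve(A, b)
```

The learning rate must satisfy lr < 2/λ_max(AᵀA) for the iteration to converge; the value above is chosen for this small example.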

• ### A simple word-recognition network with the ability to choose its own decision criteria

Publication Year: 1991, Page(s):452 - 459
Cited by:  Papers (1)
PDF (268 KB)

Various reliable algorithms for the word classification problem have been developed. All these models are necessarily based on the classification of certain 'features' that have to be extracted from the presented word. The general problem in speech recognition is: what kind of features are both word dependent as well as speaker independent? The majority of the existing systems require a feature s...

• ### Pattern recognition properties of neural networks

Publication Year: 1991, Page(s):173 - 187
Cited by:  Papers (15)
PDF (640 KB)

Artificial neural networks have been applied largely to solving pattern recognition problems. The authors point out that a firm understanding of the statistical properties of neural nets is important for using them in an effective manner for pattern recognition problems. The author gives an overview of pattern recognition properties for feedforward neural nets, with emphasis on two topics: partiti...

• ### Ordered neural maps and their applications to data compression

Publication Year: 1991, Page(s):543 - 551
Cited by:  Papers (11)
PDF (336 KB)

The implicit ordering in scalar quantization is used to substantiate the need for explicit ordering in vector quantization and the ordering of Kohonen's neural net vector quantizer is shown to provide a multidimensional analog to this scalar quantization ordering. Ordered vector quantization, using Kohonen's neural net, was successfully applied to image coding and was then shown to be advantageous...
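The ordering property comes from Kohonen's neighborhood update: each input trains the winning code vector and its chain neighbours, so adjacent indices end up coding similar inputs. A minimal 1-D self-organizing-map quantizer (our schedule and parameters, not the paper's image-coding system):

```python
import numpy as np

def train_1d_som(X, n_codes=8, epochs=30, seed=0):
    """Train a 1-D Kohonen map as an *ordered* vector quantizer: each
    input updates the winning code vector and its chain neighbours, so
    nearby indices end up coding similar inputs."""
    rng = np.random.default_rng(seed)
    codes = rng.uniform(X.min(), X.max(), size=(n_codes, X.shape[1]))
    for epoch in range(epochs):
        lr = 0.5 * (1 - epoch / epochs)                    # decaying rate
        radius = max(1, int(n_codes / 2 * (1 - epoch / epochs)))
        for x in X:
            win = int(np.argmin(np.linalg.norm(codes - x, axis=1)))
            start, stop = max(0, win - radius), min(n_codes, win + radius + 1)
            for j in range(start, stop):
                h = np.exp(-((j - win) ** 2) / (2.0 * radius ** 2))
                codes[j] += lr * h * (x - codes[j])         # pull neighbours too
    return codes

# Scalar inputs: after training, the code vectors tile the input range.
X = np.random.default_rng(1).uniform(0.0, 1.0, size=(400, 1))
codes = train_1d_som(X)
print(np.round(np.sort(codes[:, 0]), 2))
```

The index of the winning code then serves as the transmitted symbol; because neighbouring indices code similar vectors, a channel error in the index causes only a small reconstruction error, which is the advantage the abstract alludes to.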

• ### Restricted learning algorithm and its application to neural network training

Publication Year: 1991, Page(s):131 - 140
Cited by:  Patents (3)
PDF (304 KB)

The authors propose a new (semi)-optimization algorithm, called the restricted learning algorithm, for a nonnegative evaluating function which is twice continuously differentiable on a compact set Ω in R^N. The restricted learning algorithm utilizes the maximal excluding regions which are newly derived, and is shown to converge to the global ε-optimum in Ω. A ...

• ### New discriminative training algorithms based on the generalized probabilistic descent method

Publication Year: 1991, Page(s):299 - 308
Cited by:  Papers (97)  |  Patents (2)
PDF (416 KB)

The authors developed a generalized probabilistic descent (GPD) method by extending the classical theory on adaptive training by Amari (1967). Their generalization makes it possible to treat dynamic patterns (of a variable duration or dimension) such as speech as well as static patterns (of a fixed duration or dimension), for pattern classification problems. The key ideas of GPD formulations inclu...

• ### Fuzzy tracking of multiple objects

Publication Year: 1991, Page(s):589 - 592
Cited by:  Papers (1)
PDF (160 KB)

The authors have applied a previously developed MLANS neural network to the problem of tracking multiple objects in heavy clutter. In their approach the MLANS performs a fuzzy classification of all objects in multiple frames in multiple classes of tracks and random clutter. This novel approach to tracking using an optimal classification algorithm results in a dramatic improvement of performance: t...

• ### The outlier process [picture processing]

Publication Year: 1991, Page(s):60 - 69
Cited by:  Papers (2)
PDF (384 KB)

The authors discuss the problem of detecting outliers from a set of surface data. They start from the Bayes approach and the assumption that surfaces are piecewise smooth and corrupted by a combination of white Gaussian and salt and pepper noise. They show that such surfaces can be modelled by introducing an outlier process that is capable of 'throwing away' data. They make use of mean field techn...

• ### Recursive neural networks for signal processing and control

Publication Year: 1991, Page(s):523 - 532
Cited by:  Papers (4)
PDF (336 KB)

The authors describe a special type of dynamic neural network called the recursive neural network (RNN). The RNN is a single-input single-output nonlinear dynamical system with a nonrecursive subnet and two recursive subnets arranged in the configuration shown. The authors describe the architecture of the RNN, present a learning algorithm for the network, and provide some examples of its use.

• ### Workstation-based phonetic typewriter

Publication Year: 1991, Page(s):279 - 288
PDF (460 KB)

The author presents a general description of his 'phonetic typewriter' system that transcribes unlimited speech into orthographically correct text. The purpose of this paper is to motivate certain choices made in the partitioning of the problem into tasks and describe their implementation. The combination of algorithms he has selected has proven effective for well-articulated dictation in a phonem...

• ### An alternative proof of convergence for Kung-Diamantaras APEX algorithm

Publication Year: 1991, Page(s):40 - 49
Cited by:  Papers (3)
PDF (276 KB)

The problem of adaptive principal components extraction (APEX) has gained much interest. In 1990, a new neuro-computation algorithm for this purpose was proposed by S. Y. Kung and K. I. Diamantaras (see ICASSP 90, p.861-4, vol.2, 1990). An alternative proof is presented to illustrate that the K-D algorithm is in fact richer than has been proved before. The proof shows that the neural network will...