# Neural Networks for Signal Processing: Proceedings of the 1991 IEEE Workshop

Displaying Results 1 - 25 of 64
• ### Tutorial: digital neurocomputing for signal/image processing

Publication Year: 1991, Page(s):616 - 644
Cited by:  Papers (4)
PDF (1110 KB)

The computational and storage requirements of neural networks are extremely demanding, so neural information processing will be practical only when efficient, high-speed computing hardware is available. The author reviews several approaches to the architecture and implementation of neural networks for signal and image processing. The author discusses direct design of dedicated ne...

• ### Segment-based speaker adaptation by neural network

Publication Year: 1991, Page(s):442 - 451
Cited by:  Papers (3)
PDF (370 KB)

The authors propose a segment-to-segment speaker adaptation technique using a feed-forward neural network with a time-shifted sub-connection architecture. Differences in voice individuality exist in both the spectral and temporal domains. It is generally known that frame-based speaker adaptation techniques cannot compensate for speaker individuality in the temporal domain. Segment-based speaker a...

• ### Neural Networks for Signal Processing. Proceedings of the 1991 IEEE Workshop (Cat. No.91TH0385-5)

Publication Year: 1991
PDF (27 KB)
• ### Improved structures based on neural networks for image compression

Publication Year: 1991, Page(s):493 - 502
Cited by:  Papers (5)
PDF (436 KB)

The problem of efficient image compression through neural networks (NNs) is addressed. Some theoretical results on the application of 2-layer linear NNs to this problem are given. Two more elaborate structures, based on a set of NNs, are further presented; they are shown to be very efficient while remaining computationally rather simple.
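As a hedged illustration of why 2-layer *linear* networks matter for compression: the optimal linear bottleneck encoder/decoder pair spans the same subspace as the principal components of the training data, so the compression such a network can achieve may be sketched directly with an SVD. All data, block sizes, and dimensions below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy "image": 100 flattened 4x4 blocks with strongly correlated pixels.
base = rng.normal(size=(100, 3)) @ rng.normal(size=(3, 16))
blocks = base + 0.05 * rng.normal(size=(100, 16))

X = blocks - blocks.mean(axis=0)
# The SVD yields the principal directions that an optimally trained
# 2-layer linear autoencoder would also span.
U, s, Vt = np.linalg.svd(X, full_matrices=False)

def compress(X, k):
    """Project onto the top-k principal directions and reconstruct."""
    W = Vt[:k]          # encoder (k x 16); the decoder is its transpose
    return X @ W.T @ W

err3 = np.linalg.norm(X - compress(X, 3))
err8 = np.linalg.norm(X - compress(X, 8))
print(err3 > err8)  # more retained components -> lower reconstruction error
```

A wider bottleneck strictly lowers the residual here because the noise gives the data full rank.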

Publication Year: 1991, Page(s):503 - 512
Cited by:  Papers (11)  |  Patents (1)
PDF (436 KB)

The authors introduce a new class of nonlinear filters called neural filters, based on threshold decomposition and neural networks. Neural filters can approximate both linear FIR filters and weighted order statistic (WOS) filters, which include median, rank order, and weighted median filters. An adaptive algorithm is derived for determining optimal neural filters under the mean squared error (MS...
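The threshold-decomposition idea this abstract relies on can be checked numerically: an M-valued signal is sliced into binary signals at each threshold, the filter is applied slice by slice, and the filtered slices are summed ("stacked") to recover the filtered original. A minimal sketch for the median filter, with an illustrative signal and window size:

```python
import numpy as np

def median_filter(x, w=3):
    """Sliding-window median (window w, clipped at the edges)."""
    r = w // 2
    return np.array([np.median(x[max(0, i - r): i + r + 1])
                     for i in range(len(x))])

def threshold_decompose_median(x, levels, w=3):
    """Median filtering via threshold decomposition: filter each
    binary slice, then stack (sum) the filtered slices."""
    out = np.zeros(len(x))
    for t in range(1, levels):
        binary = (x >= t).astype(float)   # binary slice at level t
        out += median_filter(binary, w)   # binary median = majority vote
    return out

x = np.array([2, 7, 3, 0, 5, 5, 1, 4])
direct = median_filter(x)
stacked = threshold_decompose_median(x, levels=8)
print(np.allclose(direct, stacked))  # the stacking property holds
```

The same stacking argument is what lets a neural filter trained on binary slices represent the whole family of WOS filters.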

• ### A surface reconstruction neural network for absolute orientation problems

Publication Year: 1991, Page(s):513 - 522
Cited by:  Papers (4)
PDF (368 KB)

The authors propose a neural network for representation and reconstruction of 2-D curves or 3-D surfaces of complex objects with application to absolute orientation problems of rigid bodies. The surface reconstruction network is trained by a set of roots (the points on the curve or the surface of the object) via forming a very steep cliff between the exterior and interior of the surface, with the ...

• ### Recursive neural networks for signal processing and control

Publication Year: 1991, Page(s):523 - 532
Cited by:  Papers (4)
PDF (336 KB)

The authors describe a special type of dynamic neural network called the recursive neural network (RNN). The RNN is a single-input single-output nonlinear dynamical system with a nonrecursive subnet and two recursive subnets arranged in the configuration shown. The authors describe the architecture of the RNN, present a learning algorithm for the network, and provide some examples of its use.

• ### A simple word-recognition network with the ability to choose its own decision criteria

Publication Year: 1991, Page(s):452 - 459
Cited by:  Papers (1)
PDF (268 KB)

Various reliable algorithms for the word classification problem have been developed. All these models are necessarily based on the classification of certain 'features' that have to be extracted from the presented word. The general problem in speech recognition is: what kind of features are both word dependent and speaker independent? The majority of the existing systems requires a feature s...

• ### A neural architecture for nonlinear adaptive filtering of time series

Publication Year: 1991, Page(s):533 - 542
Cited by:  Papers (2)
PDF (452 KB)

A neural architecture for adaptive filtering which incorporates a modularization principle is proposed. It facilitates a sparse parameterization, i.e. fewer parameters have to be estimated in a supervised training procedure. The main idea is to use a preprocessor which determines the dimension of the input space and can be designed independently of the subsequent nonlinearity. Two suggestions for ...

• ### The outlier process [picture processing]

Publication Year: 1991, Page(s):60 - 69
Cited by:  Papers (2)
PDF (384 KB)

The authors discuss the problem of detecting outliers from a set of surface data. They start from the Bayes approach and the assumption that surfaces are piecewise smooth and corrupted by a combination of white Gaussian and salt-and-pepper noise. They show that such surfaces can be modelled by introducing an outlier process that is capable of 'throwing away' data. They make use of mean field techn...
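The setting the abstract describes (piecewise-smooth data corrupted by Gaussian plus salt-and-pepper noise) can be illustrated with a far simpler detector than the mean-field machinery of the paper: flag points that deviate strongly from a local median fit. This is not the authors' method, only a sketch of the outlier-detection problem itself, on a 1-D signal with invented noise levels and threshold:

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.sin(np.linspace(0, 3, 200))            # smooth "surface" (1-D here)
noisy = x + 0.01 * rng.standard_normal(200)   # white Gaussian noise
spikes = np.array([30, 90, 150])              # salt-and-pepper outliers
noisy[spikes] += 2.0

def median_smooth(y, w=5):
    """Sliding-window median, robust to isolated spikes."""
    r = w // 2
    return np.array([np.median(y[max(0, i - r): i + r + 1])
                     for i in range(len(y))])

# Flag points that deviate strongly from the locally smooth fit.
residual = np.abs(noisy - median_smooth(noisy))
detected = np.flatnonzero(residual > 0.5)
print(detected.tolist())  # → [30, 90, 150]
```

The median fit plays the role of the smoothness prior; the threshold test is a crude stand-in for the outlier process that decides which data to "throw away".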

• ### Fingerprint recognition using neural network

Publication Year: 1991, Page(s):226 - 235
Cited by:  Papers (12)  |  Patents (3)
PDF (420 KB)

The authors describe a neural network based approach for automated fingerprint recognition. Minutiae are extracted from the fingerprint image via a multilayer perceptron (MLP) classifier with one hidden layer. The backpropagation learning technique is used for its training. Selected features are represented in a special way such that they are simultaneously invariant under shift, rotation and scal...

• ### Supervised and unsupervised feature extraction from a cochlear model for speech recognition

Publication Year: 1991, Page(s):460 - 469
Cited by:  Papers (2)
PDF (468 KB)

The authors explore the application of a novel classification method that combines supervised and unsupervised training, and compare its performance to various more classical methods. The authors first construct a detailed high dimensional representation of the speech signal using Lyon's cochlear model and then optimally reduce its dimensionality. The resulting low dimensional projection retains t...

• ### Concept formation and statistical learning in nonhomogeneous neural nets

Publication Year: 1991, Page(s):30 - 39
PDF (276 KB)

The authors present an analysis of complex nonhomogeneous neural nets, an adaptive statistical learning algorithm, and the potential use of these types of systems to perform a general sensor fusion problem. The three main points are the following. First, an extension to the theory of statistical neurodynamics is introduced to include the analysis of complex nonhomogeneous neuron pools consisting o...

• ### Edge detection for optical image metrology using unsupervised neural network learning

Publication Year: 1991, Page(s):188 - 197
Cited by:  Papers (1)  |  Patents (2)
PDF (492 KB)

Several unsupervised neural network learning methods are explored and applied to edge detection of microlithography optical images. The lack of a priori knowledge about correct state assignments for the learning procedure in an optical microlithography environment makes the metrology problem a suitable area for applying unsupervised learning strategies. The methods studied include a self-organizing competiti...

• ### Ordered neural maps and their applications to data compression

Publication Year: 1991, Page(s):543 - 551
Cited by:  Papers (11)
PDF (336 KB)

The implicit ordering in scalar quantization is used to substantiate the need for explicit ordering in vector quantization and the ordering of Kohonen's neural net vector quantizer is shown to provide a multidimensional analog to this scalar quantization ordering. Ordered vector quantization, using Kohonen's neural net, was successfully applied to image coding and was then shown to be advantageous...
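A minimal sketch of the ordered (Kohonen) quantizer this abstract builds on, in the scalar case: a 1-D map trained with a shrinking neighborhood spreads an initially bunched codebook over the data, so that neighboring units end up coding nearby values. Map size, schedules, and data are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.uniform(0.0, 1.0, size=2000)

# Deliberately poor initial codebook: all 8 codewords bunched near 0.5.
codebook = 0.5 + 0.01 * rng.standard_normal(8)
init = codebook.copy()

def quant_error(cb, xs):
    """Mean squared distance from each sample to its nearest codeword."""
    return ((xs[:, None] - cb[None, :]) ** 2).min(axis=1).mean()

lr, radius = 0.3, 2.0
n = len(data)
for t, x in enumerate(data):
    win = np.argmin(np.abs(codebook - x))                 # best-matching unit
    h = np.exp(-((np.arange(8) - win) ** 2) / (2 * radius ** 2))
    codebook += lr * h * (x - codebook)                   # neighborhood update
    lr = 0.3 * (1 - t / n) + 1e-3                         # decaying step size
    radius = max(0.5, 2.0 * (1 - t / n))                  # shrinking neighborhood
print(quant_error(codebook, data) < quant_error(init, data))
```

The neighborhood term `h` is what induces the topological ordering that the paper exploits for image coding.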

• ### A mapping approach for designing neural sub-nets

Publication Year: 1991, Page(s):70 - 79
Cited by:  Papers (2)  |  Patents (1)
PDF (300 KB)

Several investigators have constructed back-propagation (BP) neural networks by assembling smaller, pre-trained building blocks. This approach leads to faster training and provides a known topology for the network. The authors carry this process down one additional level, by describing methods for mapping given functions to sub-blocks. First, polynomial approximations to the desired function are f...
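The first step named in the abstract, approximating the target function by a polynomial before mapping it onto a sub-block, can be sketched with an ordinary least-squares fit. The target function, degree, and interval below are chosen purely for illustration:

```python
import numpy as np

# Least-squares polynomial approximation of a target function,
# the starting point before mapping coefficients onto a sub-network.
xs = np.linspace(-1.0, 1.0, 200)
target = np.sin(np.pi * xs)
coeffs = np.polyfit(xs, target, deg=7)       # degree-7 LS fit
approx = np.polyval(coeffs, xs)

max_err = np.max(np.abs(approx - target))
print(max_err < 0.01)  # a degree-7 fit tracks sin(pi*x) closely on [-1, 1]
```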

• ### Neural networks for signal/image processing using the Princeton Engine multi-processor

Publication Year: 1991, Page(s):595 - 605
Cited by:  Papers (4)
PDF (528 KB)

The authors describe a modular neural network system for the removal of impulse noise from the composite video signal of television receivers, and the use of the Princeton Engine multi-processor for real-time performance assessment. This system outperforms alternative methods, such as median filters and matched filters. The system uses only eight neurons, and can be economically implemented in VL...

• ### A parallel learning filter system that learns the KL-expansion from examples

Publication Year: 1991, Page(s):121 - 130
PDF (384 KB)

A new method for learning in a single-layer linear neural network is investigated. It is based on an optimality criterion that maximizes the information in the outputs and simultaneously concentrates the outputs. The system consists of a number of so-called basic units and it is shown that the stable states of these basic units correspond to the (pure) eigenvectors of the input correlation matrix....

• ### A comparison of second-order neural networks to transform-based method for translation- and orientation-invariant object recognition

Publication Year: 1991, Page(s):236 - 245
Cited by:  Papers (2)
PDF (388 KB)

Neural networks can use second-order neurons to obtain invariance to translations in the input pattern. Alternatively, transform methods can be used to obtain translation invariance before classification by a neural network. The authors compare the use of second-order neurons to various translation-invariant transforms. The mapping properties of second-order neurons are compared to those of the gen...
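The translation invariance of second-order neurons follows from tying each pair weight to the relative displacement of its two inputs: a cyclic shift permutes the products x_i·x_j within each displacement class but leaves the weighted sum unchanged. A small numeric check, with arbitrary sizes and weights:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
w_disp = rng.normal(size=N)   # one weight per relative displacement (j - i) mod N

def second_order_neuron(x):
    """Second-order unit whose pair weight depends only on the relative
    displacement of its inputs, which yields cyclic-shift invariance."""
    s = 0.0
    for i in range(N):
        for j in range(N):
            s += w_disp[(j - i) % N] * x[i] * x[j]
    return np.tanh(s)

x = rng.normal(size=N)
out = second_order_neuron(x)
out_shifted = second_order_neuron(np.roll(x, 3))   # cyclically shifted input
print(np.isclose(out, out_shifted))
```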

• ### A relaxation neural network model for optimal multi-level image representation by local-parallel computations

Publication Year: 1991, Page(s):473 - 482
PDF (828 KB)

A relaxation neural network model is proposed to solve the multi-level image representation problem by energy minimization in local and parallel computations. This network iteratively minimizes the computational energy defined by the local error in neighboring picture elements. This optimization method can generate high quality binary and multi-level images depending on local features, and can be ...

• ### An alternative proof of convergence for Kung-Diamantaras APEX algorithm

Publication Year: 1991, Page(s):40 - 49
Cited by:  Papers (3)
PDF (276 KB)

The problem of adaptive principal components extraction (APEX) has gained much interest. In 1990, a new neuro-computation algorithm for this purpose was proposed by S. Y. Kung and K. I. Diamantaras (see ICASSP 90, p.861-4, vol.2, 1990). An alternative proof is presented to illustrate that the K-D algorithm is in fact richer than has been proved before. The proof shows that the neural network will...

• ### A neural network pre-processor for multi-tone detection and estimation

Publication Year: 1991, Page(s):580 - 588
Cited by:  Papers (3)  |  Patents (1)
PDF (296 KB)

A parallel bank of neural networks, each trained in a specific band of the spectrum, is proposed as a pre-processor for the detection and estimation of multiple sinusoids at low SNRs. A feedforward neural network model in the autoassociative mode, trained using the backpropagation algorithm, is used to construct this sectionized spectrum analyzer. The key concept behind this scheme is that the netw...

• ### Word recognition based on the combination of a sequential neural network and the GPDM discriminative training algorithm

Publication Year: 1991, Page(s):376 - 384
Cited by:  Papers (1)  |  Patents (1)
PDF (280 KB)

The authors propose an isolated-word recognition method based on the combination of a sequential neural network and a discriminative training algorithm using the Generalized Probabilistic Descent Method (GPDM). The sequential neural network deals with the temporal variation of speech by dynamic programming, and the GPDM discriminative training algorithm is used to discriminate easily confused word...

• ### Improving generalization performance in character recognition

Publication Year: 1991, Page(s):198 - 207
Cited by:  Papers (1)
PDF (340 KB)

One test of a new training algorithm is how well the algorithm generalizes from the training data to the test data. A new neural net training algorithm termed double backpropagation improves generalization in character recognition by minimizing the change in the output due to small changes in the input. This is accomplished by minimizing the normal energy term found in backpropagation and an addit...
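Double backpropagation augments the usual error with a penalty on the gradient of the output with respect to the *input*, so that small input perturbations change the output less. A sketch on a tiny one-hidden-layer regression net, with the input gradient computed analytically and the training step taken by finite differences for brevity; the architecture, data, and penalty weight are all invented:

```python
import numpy as np

rng = np.random.default_rng(0)
W = 0.1 * rng.normal(size=(4, 3))        # hidden-layer weights
v = 0.1 * rng.normal(size=4)             # output weights
X = rng.normal(size=(32, 3))
t = np.sin(X[:, 0])                      # toy regression targets

def forward(W, v, x):
    return v @ np.tanh(W @ x)

def input_grad(W, v, x):
    """Analytic gradient of the network output w.r.t. the input x."""
    h = np.tanh(W @ x)
    return W.T @ (v * (1 - h ** 2))

def loss(W, v, lam=0.1):
    err = sum((forward(W, v, x) - y) ** 2 for x, y in zip(X, t))
    # double-backprop term: penalize output sensitivity to the input
    pen = sum(np.sum(input_grad(W, v, x) ** 2) for x in X)
    return err + lam * pen

# One finite-difference gradient step on the combined objective.
eps, step = 1e-5, 0.02
base = loss(W, v)
g = np.zeros_like(W)
for idx in np.ndindex(*W.shape):
    Wp = W.copy()
    Wp[idx] += eps
    g[idx] = (loss(Wp, v) - base) / eps
W2 = W - step * g
print(loss(W2, v) < base)   # the step lowers the regularized loss
```

In practice the penalty gradient is obtained by a second backward pass rather than finite differences; the sketch only shows the shape of the objective.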

• ### Vector quantization of images using neural networks and simulated annealing

Publication Year: 1991, Page(s):552 - 561
Cited by:  Papers (2)  |  Patents (1)
PDF (400 KB)

Vector quantization (VQ) has already been established as a very powerful data compression technique. Specification of the 'codebook', which contains the best possible collection of 'codewords' effectively representing the variety of source vectors to be encoded, is one of the most critical requirements of VQ systems, and belongs, for most applications, to the class of hard optimization problems. A...
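One way to attack the hard codebook-design optimization mentioned above is simulated annealing: perturb one codeword at a time and accept moves by the Metropolis rule under a cooling temperature. This is a generic sketch, not the authors' specific formulation; the data, schedule, and codebook size are invented:

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy 1-D "source": two well-separated clusters of sample values.
data = np.concatenate([rng.normal(-2, 0.3, 200), rng.normal(2, 0.3, 200)])

def distortion(cb):
    """Mean squared error of nearest-codeword quantization."""
    return ((data[:, None] - cb[None, :]) ** 2).min(axis=1).mean()

codebook = rng.normal(size=4)              # random initial codewords
best, d_init = codebook.copy(), distortion(codebook)
T = 1.0
for _ in range(3000):
    cand = codebook.copy()
    cand[rng.integers(4)] += rng.normal(0, 0.3)    # perturb one codeword
    d_new, d_old = distortion(cand), distortion(codebook)
    # Metropolis rule: keep improvements, occasionally accept worse moves
    if d_new < d_old or rng.random() < np.exp((d_old - d_new) / T):
        codebook = cand
        if d_new < distortion(best):
            best = codebook.copy()
    T *= 0.998                                     # cooling schedule
print(distortion(best) <= d_init)   # never worse than the starting codebook
```

Tracking the best-so-far codebook guarantees the annealed result is at least as good as the random initialization, while the occasional uphill moves help escape poor local codebooks.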