
IEEE Transactions on Signal Processing

Issue 8 • August 2002


Displaying Results 1 - 25 of 27
  • Pilot-based estimation of time-varying multipath channels for coherent CDMA receivers

    Page(s): 2037 - 2049

    Reliable coherent wireless communication requires accurate estimation of the time-varying multipath channel. This paper addresses two issues in the context of direct-sequence code-division multiple access (CDMA) systems: (i) linear minimum mean-squared-error (MMSE) channel estimation based on a pilot transmission and (ii) the impact of channel estimation errors on coherent receiver performance. A simple characterization of the MMSE estimator in terms of a bank of filters is derived. A key channel characteristic controlling system performance is the normalized coherence time, which is approximately the number of symbols over which the channel remains strongly correlated. It is shown that estimator performance is characterized by an effective signal-to-noise ratio (SNR): the product of the pilot SNR and the normalized coherence time. A simple uniform averaging estimator is also proposed that is easy to implement and delivers near-optimal performance if properly designed. The receivers analyzed in this paper are based on a time-frequency RAKE structure that exploits joint multipath-Doppler diversity. It is shown that overall receiver performance is controlled by two competing effects: shorter coherence times lead to degraded channel estimation but improved inherent receiver performance due to Doppler diversity, with the opposite effects for longer coherence times. Our results demonstrate that exploiting Doppler diversity can significantly mitigate the error probability floors that plague conventional CDMA receivers under fast fading due to channel estimation errors.

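The uniform averaging estimator described above lends itself to a very short sketch. The snippet below is my illustration, not the paper's code: it assumes a scalar flat-fading channel with a known pilot sequence, and the function name and noise levels are hypothetical choices.

```python
import numpy as np

def uniform_avg_estimate(rx_pilot, pilot, window):
    """Estimate a slowly varying complex channel gain by averaging
    per-symbol estimates rx_pilot[k] / pilot[k] over a sliding window.
    The window length trades noise averaging (longer is better) against
    tracking of channel time variation (shorter is better)."""
    per_symbol = rx_pilot / pilot          # noisy instantaneous estimates
    kernel = np.ones(window) / window      # uniform (boxcar) averaging
    return np.convolve(per_symbol, kernel, mode="same")

# toy example: constant channel h = 0.8 + 0.6j observed in noise
rng = np.random.default_rng(0)
pilot = np.ones(200)
h = 0.8 + 0.6j
rx = h * pilot + 0.3 * (rng.standard_normal(200) + 1j * rng.standard_normal(200))
h_hat = uniform_avg_estimate(rx, pilot, window=25)
```

Lengthening `window` mimics a longer normalized coherence time: more pilot symbols are averaged, raising the effective SNR of the estimate.
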
  • Automating the modeling and optimization of the performance of signal transforms

    Page(s): 2003 - 2014

    Fast implementations of discrete signal transforms, such as the discrete Fourier transform (DFT), the Walsh-Hadamard transform (WHT), and the discrete trigonometric transforms (DTTs), can be viewed as factorizations of their corresponding transformation matrices. A given signal transform can have many different factorizations, with each factorization represented by a unique but mathematically equivalent formula. When implemented in code, these formulas can have significantly different running times on the same processor, sometimes differing by an order of magnitude. Further, the optimal implementations on various processors are often different. Given this complexity, a crucial problem is automating the modeling and optimization of the performance of signal transform implementations. To enable computer modeling of signal processing performance, we have developed and analyzed more than 15 feature sets to describe formulas representing specific transforms. Using some of these features and a limited set of training data, we have successfully trained neural networks to predict the performance of formulas accurately, with error rates below 5%. In the direction of optimization, we have developed a new stochastic evolutionary algorithm known as STEER that finds fast implementations of a variety of signal transforms. STEER is able to optimize completely new transforms specified by a user. We present results showing that STEER can find discrete cosine transform formulas that are 10-20% faster than those found by a dynamic programming search.

  • Asymptotic performance of subspace methods for synchronous multirate CDMA systems

    Page(s): 2015 - 2026

    Two direct-sequence (DS) code-division multiple access (CDMA) multirate access methods with a fixed chip rate can be employed to support multirate services: multicode (MC) access and multiple processing gain (MPG) access. In either an MC or an MPG multirate CDMA system, severe intersymbol interference (ISI) may exist due to large channel delay spread relative to the symbol interval. We generalize the blind subspace technique to such a multirate CDMA system in order to estimate the possibly long channel impulse response of a desired user. We then build a blind minimum mean-square-error (MMSE) detector to detect the user's information. The detection performance can be significantly improved by suppressing ISI, which becomes feasible once the user's channel parameters are estimated. The asymptotic performance of the channel estimator and the detector is analyzed. In particular, for typical distributions of the inputs and channel noise, closed-form expressions for the channel estimation error and the output signal-to-interference-plus-noise ratio (SINR) of the detector are derived as functions of the number of received data samples and the system parameters. Simulation results are provided to verify our analysis.

  • Scale-invariant nonlinear digital filters

    Page(s): 1986 - 1993

    Many signal processing applications involve procedures with simple, known dependencies on positive rescalings of the input data; examples include correlation and spectral analysis, quadratic time-frequency distributions, and coherence analysis. Often, system performance can be improved with pre- and/or post-processing procedures, and one of the advantages of linear procedures (e.g., smoothing and sharpening filters) is their scale-invariance (xk→yk implies λxk→λyk). There are, however, important cases where linear processing is inadequate, motivating interest in nonlinear digital filters. This paper considers the general problem of designing nonlinear filters that exhibit the following scaling behavior: xk→yk implies λxk→λ^ν yk for some ν>0, with particular emphasis on the case ν=1. Results are presented for two general design approaches. The first is the top-down design of these filters, in which a relatively weak structural constraint is imposed (e.g., membership in the nonlinear FIR class), and a complete characterization is sought for all filters satisfying the scaling criterion for some fixed ν. The second approach is the bottom-up design of filters satisfying a specified scaling behavior by interconnecting simpler filter structures with known scaling behavior. Examples are presented to illustrate both the simplicity and the utility of these design approaches.

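The ν=1 scaling behavior discussed above is easy to check numerically for a classical nonlinear FIR filter, the sliding-window median. This is my example of one filter in the class, not the paper's general characterization.

```python
import numpy as np

def median_filter(x, width=3):
    """Simple nonlinear FIR filter: sliding-window median with
    edge padding, so the output has the same length as the input."""
    x = np.asarray(x, dtype=float)
    pad = width // 2
    xp = np.pad(x, pad, mode="edge")
    return np.array([np.median(xp[k:k + width]) for k in range(len(x))])

# check positive homogeneity of degree nu = 1: filtering lam*x
# gives lam times the result of filtering x, for any lam > 0
x = np.array([1.0, 5.0, 2.0, 8.0, 3.0, 3.0, 7.0])
lam = 4.0
lhs = median_filter(lam * x)
rhs = lam * median_filter(x)
```

The check works because the median of positively rescaled data is the rescaled median; a linear FIR filter would satisfy the same identity, but the median is genuinely nonlinear.
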
  • Wiener design of adaptation algorithms with time-invariant gains

    Page(s): 1895 - 1907

    A design method is presented that extends least mean squares (LMS) adaptation of time-varying parameters by including general linear time-invariant filters that operate on the instantaneous gradient vector. The aim is to track time-varying parameters of linear regression models in situations where the regressors are stationary or have slowly time-varying properties. The adaptation law is optimized with respect to the steady-state parameter error covariance matrix for time variations modeled as vector ARIMA processes. The design method systematically uses prior information about the time-varying parameters to provide filtering, prediction, or fixed-lag smoothing estimates for arbitrary lags. The method is based on a transformation of the adaptation problem into a Wiener filter design problem. The filter works in open loop for slow parameter variations, whereas a time-varying closed loop has to be considered for fast variations. In the latter case, the filter design is performed iteratively. The general form of the solution at each iteration is obtained from a bilateral Diophantine polynomial matrix equation and a spectral factorization. For white gradient noise, the Diophantine equation has a closed-form solution. Further structural constraints result in very simple design equations. Under certain model assumptions, the Wiener-designed adaptation laws reduce to LMS adaptation. Compared with Kalman estimators, the channel tracking performance is nearly the same in mobile radio applications, whereas the complexity is, in general, much lower.

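As a point of reference for the adaptation laws above, here is the plain LMS tracker to which the paper reports its Wiener-designed laws reduce under certain model assumptions. This is a minimal sketch; the regression model, step size, and data are my choices.

```python
import numpy as np

def lms_track(phi, d, mu=0.05):
    """Standard LMS on a linear regression d[k] = phi[k].theta + noise:
    theta <- theta + mu * phi[k] * e[k], with e[k] the prediction error."""
    theta = np.zeros(phi.shape[1])
    est = []
    for k in range(len(d)):
        e = d[k] - phi[k] @ theta          # instantaneous prediction error
        theta = theta + mu * phi[k] * e    # gradient step
        est.append(theta.copy())
    return np.array(est)

rng = np.random.default_rng(1)
N, n = 2000, 2
phi = rng.standard_normal((N, n))
true_theta = np.array([1.0, -0.5])
d = phi @ true_theta + 0.05 * rng.standard_normal(N)
est = lms_track(phi, d)
```

The paper's generalization replaces the implicit unit gain on the gradient with a designed linear time-invariant filter, which this baseline omits.
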
  • A Bayesian sampling approach to decision fusion using hierarchical models

    Page(s): 1809 - 1818

    Data fusion and distributed detection have been studied extensively, and numerous results have been obtained in the literature. In this paper, the design of a fusion rule for distributed detection problems is re-examined, and a novel approach using Bayesian inference tools is proposed. Specifically, the decision fusion problem is reformulated using hierarchical models, and a Gibbs sampler is proposed to perform posterior-probability-based fusion. Its performance is essentially identical to that of the optimal likelihood-based fusion rule whenever the latter exists. The true merit of this approach is its applicability to various complex situations, e.g., in dealing with unknown signal/noise statistics, where the likelihood-based fusion rule may not be easy to obtain or may not even exist.

  • Cramer-Rao bound for nonlinear filtering with Pd<1 and its application to target tracking

    Page(s): 1916 - 1924

    The paper investigates the Cramer-Rao bound for discrete-time nonlinear filtering in the case where the probability of detection of a sensor is less than unity. The theoretical formula involves the evaluation of an exponentially growing number of possible miss/detection sequences. An approximation of the theoretical bound is proposed for practical applications, such as target tracking, where the number of sensor scans is large. An application of the developed techniques to the well-known filtering problem of tracking a re-entry ballistic object is also presented.

  • Maximum-likelihood source localization and unknown sensor location estimation for wideband signals in the near-field

    Page(s): 1843 - 1854

    In this paper, we derive the maximum-likelihood (ML) location estimator for wideband sources in the near field of the sensor array. The ML estimator is optimized in a single step, as opposed to other estimators that are optimized separately in relative time-delay and source location estimations. For the multisource case, we propose and demonstrate an efficient alternating projection procedure based on a sequential iterative search over single-source parameters. The proposed algorithm is shown to yield superior performance over other suboptimal techniques, including the wideband MUSIC and the two-step least-squares methods, and is efficient with respect to the derived Cramer-Rao bound (CRB). From the CRB analysis, we find that better source location estimates can be obtained for high-frequency signals than for low-frequency signals. In addition, a large range estimation error results when the source signal is unknown, but this unknown parameter does not have much impact on angle estimation. In some applications, the locations of some sensors may be unknown and must be estimated. The proposed method is extended to estimate the range from a source to an unknown sensor location. After a number of source-location frames, the location of the uncalibrated sensor can be determined by a least-squares unknown-sensor-location estimator.

  • Generalized sampling: a variational approach. I. Theory

    Page(s): 1965 - 1976

    We consider the problem of reconstructing a multidimensional vector function f: ℝ^m → ℝ^n from a finite set of linear measurements. These can be irregularly sampled responses of several linear filters. Traditional approaches reconstruct in an a priori given space, e.g., the space of bandlimited functions. Instead, we have chosen to specify a reconstruction that is optimal in the sense of a quadratic plausibility criterion J. First, we present the solution of the generalized interpolation problem. Later, we also consider the approximation problem, and we show that both lead to the same class of solutions. Imposing generally desirable properties on the reconstruction largely limits the choice of the criterion J. Linearity leads to a quadratic criterion based on bilinear forms. Specifically, we show that the requirements of translation, rotation, and scale invariance restrict the form of the criterion to essentially a one-parameter family. We show that the solution can be obtained as a linear combination of generating functions. We provide analytical techniques for finding these functions and the solution itself.

  • Adaptive instantaneous frequency estimation of multicomponent FM signals using quadratic time-frequency distributions

    Page(s): 1866 - 1876

    An adaptive approach to the estimation of the instantaneous frequency (IF) of nonstationary mono- and multicomponent FM signals with additive Gaussian noise is presented. The IF estimation is based on the fact that quadratic time-frequency distributions (TFDs) have maxima around the IF law of the signal. It is shown that the bias and variance of the IF estimate are functions of the lag window length. If there is a bias-variance tradeoff, then the optimal window length for this tradeoff depends on the unknown IF law. Hence, an adaptive algorithm with a time-varying and data-driven window length is needed. The adaptive algorithm can utilize any quadratic TFD that satisfies three conditions. First, the IF estimation variance given by the chosen distribution should be a continuously decreasing function of the window length, whereas the bias should be continuously increasing, so that the algorithm converges to the optimal window length for the bias-variance tradeoff. Second, the time-lag kernel filter of the chosen distribution should not perform narrowband filtering in the lag direction, in order not to interfere with the adaptive window in that direction. Third, the distribution should perform effective cross-terms reduction while keeping high resolution, in order to be efficient for multicomponent signals. A quadratic distribution with high resolution, effective cross-terms reduction, and no lag filtering is proposed. The algorithm estimates multiple IF laws by using a tracking algorithm for the signal components and utilizing the property that the proposed distribution enables nonparametric component amplitude estimation. An extension of the proposed TFD consisting of the use of time-only kernels for adaptive IF estimation is also proposed.

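The premise that a quadratic TFD peaks near the IF can be illustrated with the simplest quadratic TFD, the spectrogram. The sketch below is my illustration, not the paper's distribution or its adaptive window selection: it estimates the IF of a constant-frequency tone as the per-time peak of a fixed-window periodogram.

```python
import numpy as np

def if_estimate_spectrogram(x, fs, window):
    """Estimate instantaneous frequency as the peak location of a
    sliding-window periodogram (the spectrogram, a quadratic TFD).
    A longer window lowers variance but biases the estimate when the
    IF is curving: the tradeoff an adaptive window must balance."""
    half = window // 2
    freqs = np.fft.rfftfreq(window, d=1 / fs)
    est = []
    for k in range(half, len(x) - half):
        seg = x[k - half:k + half] * np.hanning(window)
        est.append(freqs[np.argmax(np.abs(np.fft.rfft(seg)))])
    return np.array(est)

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
x = np.cos(2 * np.pi * 100.0 * t)      # constant-IF test tone at 100 Hz
est = if_estimate_spectrogram(x, fs, window=64)
```

The peak-bin estimate is quantized to the FFT bin spacing fs/window, which is why a fixed window cannot be made arbitrarily short without losing frequency resolution.
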
  • Three-dimensional discrete wavelet transform architectures

    Page(s): 2050 - 2063

    The three-dimensional (3-D) discrete wavelet transform (DWT) suits compression applications well, allowing better compression of 3-D data than two-dimensional (2-D) methods. This paper describes two architectures for the 3-D DWT, called the 3DW-I and the 3DW-II. The first architecture (3DW-I) is based on folding, whereas the 3DW-II architecture is block-based. Potential applications for these architectures include high-definition television (HDTV) and medical data compression, such as magnetic resonance imaging (MRI). The 3DW-I architecture is an implementation of the 3-D DWT similar to folded 1-D and 2-D designs. It allows even distribution of the processing load onto three sets of filters, with each set performing the calculations for one dimension. The control for this design is very simple, since the data are operated on in a row-column-slice fashion. Due to pipelining, all filters are utilized 100% of the time, except during the start-up and wind-down periods. The 3DW-II architecture uses block inputs to reduce the on-chip memory requirement. It has a central control unit to select which coefficients to pass on to the lowpass and highpass filters. The on-chip memory will be small compared with the input size, since it depends solely on the filter sizes. The 3DW-I and 3DW-II architectures are compared according to memory requirements, number of clock cycles, and frames processed per second. The two architectures described are the first 3-D DWT architectures.

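The row-column-slice processing performed by the 3DW-I pipeline corresponds to applying a 1-D DWT stage along each dimension in turn. Below is a minimal software model of one decomposition level, using Haar filters for brevity; this is my sketch of the separable transform, not the hardware architecture.

```python
import numpy as np

def haar_step(a, axis):
    """One level of the 1-D Haar DWT along a given axis:
    lowpass = (even + odd)/sqrt(2), highpass = (even - odd)/sqrt(2)."""
    even = np.take(a, np.arange(0, a.shape[axis], 2), axis=axis)
    odd = np.take(a, np.arange(1, a.shape[axis], 2), axis=axis)
    return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)

def dwt3d_level(vol):
    """One 3-D DWT level: filter along rows, then columns, then slices,
    producing the 8 subbands (LLL ... HHH) of a 2x2x2 decimation."""
    bands = [vol]
    for axis in range(3):
        bands = [sub for b in bands for sub in haar_step(b, axis)]
    return bands  # 8 subbands, each of half size in every dimension

vol = np.arange(8 * 8 * 8, dtype=float).reshape(8, 8, 8)
bands = dwt3d_level(vol)
```

Because the Haar step is orthonormal, the total energy of the eight subbands equals that of the input volume, a quick sanity check on the implementation.
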
  • Fast recursive basis function estimators for identification of time-varying processes

    Page(s): 1925 - 1934

    When system parameters vary rapidly with time, weighted least squares filters are not capable of following the changes satisfactorily; more elaborate estimation schemes, based on the method of basis functions, have to be used instead. Basis function estimators have increased tracking capabilities but are computationally very demanding. The paper introduces a new class of adaptive filters, based on the concept of postfiltering, that combine the improved parameter tracking capabilities typical of the basis function algorithms with the low computational requirements typical of the weighted least squares algorithms.

  • Disentangling chromosome overlaps by combining trainable shape models with classification evidence

    Page(s): 2080 - 2085

    Resolving chromosome overlaps is an unsolved problem in automated chromosome analysis. We propose a method that combines evidence from classification and shape, based on trainable shape models. In an evaluation using synthesized overlaps, certain cases are resolvable using shape evidence alone; where shape evidence is misleading, classification evidence improves performance.

  • Generalized sampling: a variational approach. II. Applications

    Page(s): 1977 - 1985

    For pt. I see ibid., vol. 50, no. 8, pp. 1965-1976 (2002). The variational reconstruction theory from the companion paper finds a solution consistent with some linear constraints that minimizes a quadratic plausibility criterion. It is suitable for treating vector and multidimensional signals. Here, we apply the theory to a generalized sampling system consisting of a multichannel filterbank followed by nonuniform sampling. We provide ready-made formulas that should permit application of the technique directly to problems at hand. We comment on the practical aspects of the method, such as numerical stability and speed. We present the reconstruction formula and apply it to several practical examples, including a new variational formulation of derivative sampling, landmark warping, and tomographic reconstruction.

  • A polynomial-time algorithm for designing FIR filters with power-of-two coefficients

    Page(s): 1935 - 1941

    This paper presents a polynomial-time algorithm for designing digital filters with coefficients expressible as sums of signed power-of-two (SPT) terms. Our proposal is based on the observation that, under certain circumstances, the realization cost of a filter with SPT coefficients depends only on the total number of SPT terms, regardless of how the terms are distributed among the coefficients. Therefore, the number of SPT terms for each coefficient is not limited to a fixed number. Instead, the terms are allowed to vary subject to a given total number of SPT terms for the filter. This provides the possibility of finding a better set of coefficients. Our algorithm starts by initializing all the quantized coefficient values to zero. It chooses one SPT term at a time and allocates it to the currently most deserving coefficient so as to minimize the L distance between the SPT coefficients and their corresponding infinite-wordlength values. This process of allocating SPT terms is repeated until the total number of SPT terms for the filter equals a prescribed number. For each filter gain, the time complexity is a second-order polynomial in the number of coefficients to be optimized and a first-order polynomial in the filter wordlength.

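The allocation loop described above is easy to prototype. The sketch below greedily assigns one signed power-of-two term at a time to the coefficient with the largest residual error. This is my illustration only: it uses the maximum absolute error as the criterion and ignores the filter-gain search, so it is not the paper's exact algorithm.

```python
import numpy as np

def allocate_spt(coeffs, total_terms, max_shift=8):
    """Greedy SPT allocation: at each step, pick the coefficient with
    the largest residual quantization error (the "most deserving" one)
    and add to it the signed power of two closest to that residual."""
    q = np.zeros_like(coeffs, dtype=float)         # quantized values
    shifts = 2.0 ** -np.arange(0, max_shift + 1)   # candidate magnitudes
    for _ in range(total_terms):
        resid = coeffs - q
        i = int(np.argmax(np.abs(resid)))          # most deserving coefficient
        term = shifts[np.argmin(np.abs(shifts - np.abs(resid[i])))]
        q[i] += np.sign(resid[i]) * term           # signed power-of-two step
    return q

h = np.array([0.634, -0.23, 0.081])
h_q = allocate_spt(h, total_terms=9)
```

Note that the budget `total_terms` is shared across all coefficients, so a hard-to-quantize coefficient can absorb more terms than an easy one, which is the key freedom the paper exploits.
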
  • Regularity and strict identifiability in MIMO systems

    Page(s): 1831 - 1842

    We study finite impulse response (FIR) multi-input multi-output (MIMO) systems with additive noise, treating the finite-length sources and channel coefficients as deterministic unknowns and considering both regularity and identifiability. In blind estimation, the ambiguity set is large, admitting linear combinations of the sources. We show that the Fisher information matrix (FIM) is always rank deficient by at least the number of sources squared and develop necessary and sufficient conditions for the FIM to achieve its minimum nullity. Tight bounds are given on the source data lengths required to achieve minimum nullity of the FIM. We consider combinations of constraints that lead to regularity (i.e., to a full-rank FIM and, thus, a meaningful Cramer-Rao bound). Exploiting the null space of the FIM, we show how parameters must be specified to obtain a full-rank FIM, with implications for training sequence design in multisource systems. Together with constrained Cramer-Rao bounds (CRBs), this approach provides practical techniques for obtaining appropriate MIMO CRBs in many cases. Necessary and sufficient conditions are also developed for strict identifiability (ID). The conditions for strict ID are shown to be nearly equivalent to those for the FIM nullity to be minimized.

  • Fast principal component extraction by a weighted information criterion

    Page(s): 1994 - 2002

    Principal component analysis (PCA) is an essential technique in data compression and feature extraction, and there has been much interest in developing fast PCA algorithms. On the basis of the concepts of both weighted subspace and information maximization, this paper proposes a weighted information criterion (WINC) for searching for the optimal solution of a linear neural network. We analytically show that the optimum weights globally asymptotically converge to the principal eigenvectors of a stationary vector stochastic process. Through a stability analysis of the equilibria of the proposed criterion, we establish how the choice of weighting matrix depends on the statistics of the input process and thereby reveal the constraint on the choice of a weighting matrix. We develop two adaptive algorithms based on the WINC for extracting multiple principal components in parallel. Both algorithms provide an adaptive step size, which leads to a significant improvement in learning performance. Furthermore, the recursive least squares (RLS) version of the WINC algorithms has a low computational complexity O(Np), where N is the input vector dimension and p is the number of desired principal components. In fact, the WINC algorithm corresponds to a three-layer linear neural network model capable of extracting multiple principal components in parallel. The WINC algorithm also generalizes some well-known PCA/PSA algorithms simply by adjusting the corresponding parameters. Since the weighting matrix need not be known precisely, the WINC algorithm is easy to design for practical applications. The accuracy and speed advantages of the WINC algorithm are verified through simulations.

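For context, the classical single-component member of the adaptive-PCA family that criteria like WINC generalize is Oja's rule. Below is a compact sketch of Oja's rule only (my illustration, not the WINC algorithm itself); the data and step size are made up.

```python
import numpy as np

def oja_pc1(X, eta=0.005, epochs=20):
    """Oja's rule: w += eta * y * (x - y * w) with y = w.x.
    For small step sizes the weight vector converges to the unit-norm
    principal eigenvector of the input covariance."""
    rng = np.random.default_rng(0)
    w = rng.standard_normal(X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in X:
            y = w @ x
            w += eta * y * (x - y * w)   # Hebbian term with self-normalization
    return w / np.linalg.norm(w)

rng = np.random.default_rng(3)
# anisotropic Gaussian data: dominant variance along axis 0
X = rng.standard_normal((500, 3)) * np.array([3.0, 1.0, 0.5])
w = oja_pc1(X)
```

The fixed step size here is exactly the limitation the abstract points at: WINC-style algorithms adapt the step size and extract several components in parallel.
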
  • Recursive estimation of the covariance matrix of a compound-Gaussian process and its application to adaptive CFAR detection

    Page(s): 1908 - 1915

    Adaptive detection of signals embedded in Gaussian or non-Gaussian noise is a problem of primary concern among radar engineers. We propose a recursive algorithm to estimate the structure of the covariance matrix of either a set of Gaussian vectors that share the same spectral properties up to a multiplicative factor or a set of spherically invariant random vectors (SIRVs) with the same covariance matrix and possibly correlated texture components. We also assess the performance of an adaptive implementation of the normalized matched filter (NMF), relying on the newly introduced estimate, in the presence of compound-Gaussian, clutter-dominated disturbance. In particular, it is shown that a proper initialization of the recursive procedure leads to an adaptive NMF with the constant false alarm rate (CFAR) property, and that it is very effective in heterogeneous environments of relevant practical interest.

  • A robust O(N log N) algorithm for optimal decoding of first-order Σ-Δ sequences

    Page(s): 1942 - 1950

    An exact recursive formula is derived to describe the structure of an ideal first-order Σ-Δ output sequence as a function of its input. Specifically, it is shown that every Σ-Δ sequence generated by the constant input x∈[0, 1] can be decomposed into a shorter Σ-Δ subsequence whose input x'∈[0, 1) may be used to recover that of the original Σ-Δ sequence. This formula is applied to develop an O(N log N) algorithm for decoding an N-length sequence. Without knowledge of the modulator's initial state, it exhibits an average improvement, over all initial states, of 4.2 dB in output signal-to-noise ratio (SNR) compared with a near-optimal linear finite impulse response (FIR) filter. The regularity of the ideal first-order Σ-Δ structure with constant inputs permits the algorithm to be extended to bandlimited and noise-corrupted data. A simple error correction procedure is demonstrated, and it is shown that the recursive algorithm can outperform FIR filters on sequences of length N<64 having input SNRs as low as 30 dB.

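The structure being decoded is easy to reproduce. Below is an ideal first-order Σ-Δ modulator with constant input, together with the naive mean decoder; this is my sketch of the setup, and the paper's O(N log N) recursive decoder, which is not reproduced here, outperforms such linear decoding.

```python
import numpy as np

def sigma_delta(x, N, u0=0.0):
    """Ideal first-order Sigma-Delta modulator with constant input
    x in [0, 1]: accumulate the input, emit 1 when the accumulator
    reaches 1, and subtract the emitted bit (initial state u0 in [0, 1))."""
    u, bits = u0, []
    for _ in range(N):
        u += x
        b = 1 if u >= 1.0 else 0
        u -= b
        bits.append(b)
    return np.array(bits)

bits = sigma_delta(x=0.3, N=1000)
x_hat = bits.mean()   # naive decoder: average the output bits
```

For a constant input, the fraction of ones converges to x at rate O(1/N), which is the baseline the optimal decoder improves upon.
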
  • Signal extrapolation in the real Zak space

    Page(s): 1957 - 1964

    A new formulation of the Gerchberg-Papoulis (1974, 1975) algorithm for extrapolation of bandlimited signals was previously introduced by translating the fundamental operations of the GP procedure, truncation and the Fourier transform, into the language of the finite Zak (1967) transform. However, that Zak transform formulation of the GP algorithm assumes complex-valued signals, whereas the GP procedure is usually applied to real signals. We present a new and more efficient algorithm that acts directly on a real signal via the real Zak transform (RZT) relation between a signal and its Hartley transform, leading, in effect, to approximately a four-fold reduction in the computational complexity of the complex Zak space approach.

  • A generic framework for blind source separation in structured nonlinear models

    Page(s): 1819 - 1830

    This paper is concerned with blind source separation in nonlinear models. Special attention is paid to separability issues. Results show that separation is impossible in the general case. However, for specific nonlinear models, the problem becomes tractable. A generic set of parametric nonlinear mixtures is considered: this set has a Lie group structure (a group structure with a continuous binary operation). On the parameter set, a definition of a relative gradient is given and is used to design both batch and stochastic algorithms. For the latter, it is shown how a proper use of the relative gradient leads to equivariant adaptive algorithms.

  • Multidimensional synchronous dataflow

    Page(s): 2064 - 2079
    Save to Project icon | Request Permissions | Click to expandQuick Abstract | PDF file iconPDF (440 KB) |  | HTML iconHTML  

    Signal flow graphs with dataflow semantics have been used in signal processing system simulation, algorithm development, and real-time system design. Dataflow semantics implicitly expose function parallelism by imposing only a partial ordering constraint on the execution of functions. One particular form of dataflow called synchronous dataflow (SDF) has been quite popular in programming environments for digital signal processing (DSP) since it has strong formal properties and is ideally suited for expressing multirate DSP algorithms. However, SDF and other dataflow models use first-in first-out (FIFO) queues on the communication channels and are thus ideally suited only for one-dimensional (1-D) signal processing algorithms. While multidimensional systems can also be expressed by collapsing arrays into 1-D streams, such modeling is often awkward and can obscure potential data parallelism that might be present. SDF can be generalized to multiple dimensions; this model is called multidimensional synchronous dataflow (MDSDF). This paper presents MDSDF and shows how MDSDF can be efficiently used to model a variety of multidimensional DSP systems, as well as other types of systems that are not modeled elegantly in SDF. However, MDSDF generalizes the FIFO queues used in SDF to arrays and, thus, is capable only of expressing systems sampled on rectangular lattices. This paper also presents a generalization of MDSDF that is capable of handling arbitrary sampling lattices and lattice-changing operations such as nonrectangular decimation and interpolation. An example of a practical system is given to show the usefulness of this model. The key challenge in generalizing the MDSDF model is preserving static schedulability, which eliminates the overhead associated with dynamic scheduling, and preserving a model where data parallelism, as well as functional parallelism, is fully explicit View full abstract»

  • Two's complement quantization in periodic digital filters

    Page(s): 1951 - 1956

    The stability of periodically shift variant (PSV) filters is studied when implemented with two's complement truncation (TCT) quantization. Block form and standard (nonblock) form implementations are considered, and two sufficient conditions are established for stability. As a special case, second-order coupled-form PSV filters are then investigated under TCT quantization. Stability regions are established within the parameter space for block implementations. Examples are given to illustrate the results.
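    The nonlinearity at issue is easy to state concretely: truncating two's complement fixed-point words discards the low-order fraction bits, which rounds toward negative infinity, so every quantization introduces a one-sided error in (-2^-b, 0]. A minimal sketch of that operation (the function name and bit width are illustrative, not from the paper):

    ```python
    import math

    def tct_quantize(x, frac_bits):
        """Two's complement truncation to frac_bits fractional bits:
        dropping the low-order bits floors x to a multiple of 2**-frac_bits,
        so the error is always in (-2**-frac_bits, 0] regardless of sign."""
        step = 2.0 ** -frac_bits
        return math.floor(x / step) * step

    # Truncation is not rounding: both errors are negative-going
    print(tct_quantize(0.26, 3))   # 0.25  (error -0.01)
    print(tct_quantize(-0.26, 3))  # -0.375 (error -0.115)
    ```

    This one-sided, signal-dependent error inside a feedback loop is what makes stability (freedom from limit cycles) a nontrivial property that the paper's sufficient conditions must guarantee.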

  • Exploiting sparsity in adaptive filters

    Page(s): 1883 - 1894

    This paper studies a class of algorithms called natural gradient (NG) algorithms. The least mean square (LMS) algorithm is derived within the NG framework, and a family of LMS variants that exploit sparsity is derived. This procedure is repeated for other algorithm families, such as the constant modulus algorithm (CMA) and decision-directed (DD) LMS. Mean squared error, stability, and convergence analyses of the family of sparse LMS algorithms are provided, and it is shown that if the system is sparse, then the new algorithms converge faster for a given total asymptotic MSE. Simulations are provided to confirm the analysis. In addition, Bayesian priors matching the statistics of a database of real channels are given, and algorithms are derived that exploit these priors. Simulations using measured channels show a realistic application of these algorithms.
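    The sparsity-exploiting idea can be sketched with a per-tap reweighted LMS update in the proportionate/natural-gradient style: taps believed to be large adapt with a larger effective step size, so on a sparse system the few active taps converge quickly. This is a generic illustration of the mechanism under assumed parameter values, not the paper's exact derivation:

    ```python
    import numpy as np

    def sparse_lms_step(w, x, d, mu=0.05, eps=1e-3):
        """One LMS step with a per-tap gain |w| + eps: the gain acts as a
        diagonal metric (natural-gradient style), so large taps adapt
        faster. eps keeps zero-valued taps from freezing entirely."""
        e = d - w @ x
        w = w + mu * e * (np.abs(w) + eps) * x
        return w, e

    # Identify a sparse 16-tap channel from noise-free input/output pairs
    rng = np.random.default_rng(0)
    h = np.zeros(16)
    h[3] = 1.0                     # a single active tap: a sparse system
    w = np.zeros(16)
    for _ in range(2000):
        x = rng.standard_normal(16)
        w, e = sparse_lms_step(w, x, h @ x)
    ```

    With a uniform step size (plain LMS), every tap shares the same adaptation rate; the reweighting concentrates it where the energy is, which is the faster-convergence-at-equal-asymptotic-MSE trade the abstract describes.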

  • A frequency domain blind signal separation method based on decorrelation

    Page(s): 1855 - 1865

    This paper addresses the problem of separating multiple speakers from mixtures recorded by multiple microphones in a room. An adaptive blind signal separation algorithm based entirely on second-order statistics is derived. One advantage of this algorithm is that no parameters need to be tuned. Moreover, an extension of the algorithm that can simultaneously perform blind signal separation and echo cancellation is derived. Experiments with real recordings show the effectiveness of the algorithm on real-world signals.
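    The core second-order-statistics idea — sources are mutually uncorrelated while their lagged autocorrelations differ — can be sketched for the instantaneous-mixture case with an AMUSE-style procedure: whiten the mixtures, then diagonalize a time-lagged covariance to recover the unmixing rotation. This is a simplified illustration, not the paper's frequency-domain convolutive algorithm:

    ```python
    import numpy as np

    def amuse_separate(X, lag=1):
        """Second-order BSS sketch: whitening makes the mixing rotation
        orthogonal; the eigenvectors of the symmetrized lagged covariance
        then recover it, provided the sources' lagged autocorrelations differ."""
        X = X - X.mean(axis=1, keepdims=True)
        C0 = X @ X.T / X.shape[1]
        d, E = np.linalg.eigh(C0)
        W = E @ np.diag(d ** -0.5) @ E.T          # whitening matrix
        Z = W @ X
        C1 = Z[:, lag:] @ Z[:, :-lag].T / (Z.shape[1] - lag)
        C1 = (C1 + C1.T) / 2                      # symmetrize lagged covariance
        _, U = np.linalg.eigh(C1)
        return U.T @ Z                            # sources, up to order and sign

    # Demo: two sources with different spectra, hypothetical 2x2 mixing
    t = np.arange(5000)
    rng = np.random.default_rng(1)
    S = np.vstack([np.sin(0.1 * t), rng.standard_normal(5000)])
    A = np.array([[1.0, 0.6], [0.5, 1.0]])
    Y = amuse_separate(A @ S)
    ```

    Because everything here is built from covariances, no higher-order statistics or tunable nonlinearities are involved — the property the abstract highlights as removing parameter tuning.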


Aims & Scope

IEEE Transactions on Signal Processing covers novel theory, algorithms, performance analyses, and applications of techniques for the processing, understanding, learning, retrieval, mining, and extraction of information from signals.


Meet Our Editors

Editor-in-Chief
Zhi-Quan (Tom) Luo
University of Minnesota