
IEEE Transactions on Signal Processing

Issue 9 • Sept. 2001

Displaying Results 1 - 25 of 31
  • Call for papers

    Page(s): 2162
    Freely Available from IEEE
  • Adaptive noncoherent linear minimum ISI equalization for MDPSK and MDAPSK signals

    Page(s): 2018 - 2030

    A novel noncoherent linear equalization scheme is introduced and analyzed. In contrast to previously proposed noncoherent equalization schemes, the proposed scheme is applicable not only to M-ary differential phase-shift keying (MDPSK) but also to M-ary differential amplitude/phase-shift keying (MDAPSK). The novel scheme minimizes the variance of intersymbol interference (ISI) in the equalizer output signal. The optimum equalizer coefficients may be calculated directly from an eigenvalue problem. For an efficient recursive adaptation of the equalizer coefficients, a modified least-mean-square (LMS) and a modified recursive least-squares (RLS) algorithm are proposed. It is shown that the corresponding cost function has no spurious local minima, which ensures global convergence of the adaptive algorithms. Simulations confirm the good performance of the proposed noncoherent equalization scheme and its robustness against frequency offset.

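
The paper's modified LMS/RLS updates are specific to the noncoherent setting; for context, a minimal sketch of the baseline (coherent) LMS recursion that such schemes adapt. The channel taps, step size, and iteration count below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Baseline LMS adaptation: recursively drive the coefficient vector w toward
# an unknown FIR response by stochastic-gradient steps on the squared error.
rng = np.random.default_rng(0)
h_true = np.array([1.0, 0.5, -0.2])    # unknown FIR response (assumed)
n_taps, mu, n_iter = 3, 0.05, 2000

w = np.zeros(n_taps)                   # adaptive coefficients
x = rng.standard_normal(n_iter + n_taps)
for k in range(n_iter):
    u = x[k:k + n_taps][::-1]          # regressor (most recent sample first)
    d = h_true @ u                     # desired response (noise-free here)
    e = d - w @ u                      # a-priori error
    w = w + mu * e * u                 # LMS update

print(np.round(w, 3))                  # approaches h_true
```

In the noise-free case the coefficients converge to the true response; the paper's contribution is a cost function whose lack of spurious minima guarantees this kind of global convergence in the noncoherent case.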
  • Almost-sure identifiability of multidimensional harmonic retrieval

    Page(s): 1849 - 1859

    Two-dimensional (2-D) and, more generally, multidimensional harmonic retrieval is of interest in a variety of applications, including transmitter localization and joint time and frequency offset estimation in wireless communications. The associated identifiability problem is key in understanding the fundamental limitations of parametric methods in terms of the number of harmonics that can be resolved for a given sample size. Consider a mixture of 2-D exponentials, each parameterized by amplitude, phase, and decay rate plus frequency in each dimension. Suppose that I equispaced samples are taken along one dimension and, likewise, J along the other dimension. We prove that if the number of exponentials is less than or equal to roughly IJ/4, then, assuming sampling at the Nyquist rate or above, the parameterization is almost surely identifiable. This is significant because the best previously known achievable bound was roughly (I+J)/2. For example, consider I=J=32; our result yields 256 versus 32 identifiable exponentials. We also generalize the result to N dimensions, proving that the number of exponentials that can be resolved is proportional to the total sample size.

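
The example quoted in the abstract can be checked directly:

```python
# Identifiability bounds from the abstract, evaluated for I = J = 32:
# the paper's bound is roughly IJ/4, the best previously known roughly (I+J)/2.
I, J = 32, 32
new_bound = (I * J) // 4        # this paper's bound
old_bound = (I + J) // 2        # previously known bound
print(new_bound, old_bound)     # 256 32
```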
  • Adaptive minor component extraction with modular structure

    Page(s): 2127 - 2137

    An information criterion for adaptively estimating multiple minor eigencomponents of a covariance matrix is proposed. It is proved that the proposed criterion has a unique global minimum at the minor subspace and that all other equilibrium points are saddle points. Based on the gradient search approach of the proposed information criterion, an adaptive algorithm called adaptive minor component extraction (AMEX) is developed. The proposed algorithm automatically performs multiple minor component extraction in parallel without the inflation procedure. Similar to the adaptive lattice filter structure, the AMEX algorithm also has the flexibility that increasing the number of desired minor components does not affect the previously extracted minor components. The AMEX algorithm has a highly modular structure, and the various modules operate completely in parallel without any delay. Simulation results are given to demonstrate the effectiveness of the AMEX algorithm for both minor component analysis (MCA) and minor subspace analysis (MSA).

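
AMEX itself is an adaptive, inflation-free algorithm; the batch computation below only illustrates its target quantity: the minor eigencomponents (smallest eigenvalues and eigenvectors) of a covariance matrix. The matrix C is an illustrative assumption.

```python
import numpy as np

# What MCA/MSA extract: the eigenvectors of a covariance matrix associated
# with its smallest eigenvalues.
C = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 0.5],
              [0.0, 0.5, 1.0]])
eigvals, eigvecs = np.linalg.eigh(C)   # eigh returns ascending eigenvalues
minor_vals = eigvals[:2]               # two minor components
minor_subspace = eigvecs[:, :2]        # orthonormal minor subspace basis
print(np.round(minor_vals, 3))
```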
  • Maximum likelihood trend estimation in exponential noise

    Page(s): 2087 - 2095

    This paper considers the problem of estimating a linear trend in noise, where the noise is modeled as an independent and identically distributed (i.i.d.) random process with exponential distribution. The corresponding maximum likelihood estimator of the trend and noise parameters is derived, and its performance is analyzed. It turns out that the resulting maximum likelihood estimator has to solve a linear programming problem with the number of constraints equal to the number of received data samples. A recursive form of the maximum likelihood estimator, which makes it suitable for implementation in real-time systems, is then proposed. The memory requirements of the recursive algorithm are data dependent and are investigated by simulations using both computer-generated and recorded data sets.

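
For y_t = a + b·t + n_t with i.i.d. exponential (nonnegative) noise, maximizing the likelihood amounts to minimizing the total residual subject to the fitted line lying below all observations, i.e. a linear program. With only two unknowns the optimum sits at a vertex (a line through two observations that stays below the rest), so a small data set can be solved by brute-force vertex enumeration. This is a sketch of the LP formulation only; the paper's recursive solver is not reproduced. The trend parameters and noise scale are illustrative assumptions.

```python
import numpy as np

# ML trend fit in exponential noise: maximize sum_t (a + b t)
# subject to a + b t <= y_t for all t, solved by vertex enumeration.
rng = np.random.default_rng(1)
a_true, b_true, n = 2.0, 0.5, 80
t = np.arange(n)
y = a_true + b_true * t + rng.exponential(scale=0.3, size=n)

best, best_obj = None, -np.inf
for i in range(n):
    for j in range(i + 1, n):
        b = (y[j] - y[i]) / (t[j] - t[i])   # line through observations i, j
        a = y[i] - b * t[i]
        fit = a + b * t
        if np.all(fit <= y + 1e-9):         # feasibility: line below all data
            obj = fit.sum()                 # LP objective (maximize)
            if obj > best_obj:
                best_obj, best = obj, (a, b)

a_hat, b_hat = best
print(round(a_hat, 2), round(b_hat, 2))
```

Feasible vertices always exist (any supporting edge of the lower convex hull qualifies), which is why the enumeration cannot come up empty.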
  • Analysis of the partitioned frequency-domain block LMS (PFBLMS) algorithm

    Page(s): 1860 - 1874

    In this paper, we present a new analysis of the partitioned frequency-domain block least-mean-square (PFBLMS) algorithm. We analyze the matrices that control the convergence rates of the various forms of the PFBLMS algorithm and evaluate their eigenvalues for both white and colored input processes. Because of the complexity of the problem, the detailed analyses are only given for the case where the filter input is a first-order autoregressive process (AR-1). However, the results are then generalized to arbitrary processes in a heuristic way by examining a set of numerical examples. An interesting finding (consistent with earlier publications) is that the unconstrained PFBLMS algorithm suffers from slow modes of convergence that the FBLMS algorithm does not. Fortunately, however, these modes are not present in the constrained PFBLMS algorithm. A simplified version of the constrained PFBLMS algorithm, known as the schedule-constrained PFBLMS algorithm, is also discussed, and the reason for its similar behavior to that of its fully constrained version is explained.

  • Reconstructions and predictions of nonlinear dynamical systems: a hierarchical Bayesian approach

    Page(s): 2138 - 2155

    An attempt is made to reconstruct model nonlinear dynamical systems from scalar time series data via a hierarchical Bayesian framework. Reconstruction is performed by fitting given training data with a parameterized family of functions without overfitting. The reconstructed model dynamical systems are compared with respect to (approximated) model marginal likelihood, which is a natural Bayesian information criterion. The best model is selected with respect to this criterion and is utilized to make predictions. The results are applied to two problems: (i) chaotic time series prediction and (ii) building air-conditioning load prediction. The former is a very good class of problems for checking the abilities of prediction algorithms for at least two reasons. First, since no linear dynamical system can exhibit chaotic behavior, an algorithm must capture the nonlinearities behind the time series. Second, chaotic dynamical systems are sensitive to initial conditions. More precisely, the error grows exponentially with respect to time, so precision in capturing the nonlinearities is also important. Experimental results appear to indicate that the proposed scheme can capture difficult nonlinearities behind the chaotic time series data. The latter class of problems (air-conditioning load prediction) is motivated by strong demand for reducing CO2 emissions associated with electric power generation. The authors won a prediction competition using the proposed algorithm; therefore, it appears to be reasonably sound.

  • Automatic generation of fast discrete signal transforms

    Page(s): 1992 - 2002

    This paper presents an algorithm that derives fast versions for a broad class of discrete signal transforms symbolically. The class includes, but is not limited to, the discrete Fourier and the discrete trigonometric transforms. This is achieved by finding fast sparse matrix factorizations for the matrix representations of these transforms. Unlike previous methods, the algorithm is entirely automatic and uses the defining matrix as its sole input. The sparse matrix factorization algorithm consists of two steps: first, the "symmetry" of the matrix is computed in the form of a pair of group representations; second, the representations are stepwise decomposed, giving rise to a sparse factorization of the original transform matrix. We have successfully demonstrated the method by automatically computing efficient transforms in several important cases: for the DFT, we obtain the Cooley-Tukey (1965) FFT; for a class of transforms including the DCT, type II, the number of arithmetic operations for our fast transforms is the same as for the best-known algorithms. Our approach provides new insights and interpretations for the structure of these signal transforms and the question of why fast algorithms exist. The sparse matrix factorization algorithm is implemented within the software package AREP.

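
A numerical sanity check of the idea behind sparse factorizations: the Cooley-Tukey FFT, which the paper's method rediscovers for the DFT, computes exactly the dense DFT matrix-vector product, but through a product of sparse stages with far fewer arithmetic operations.

```python
import numpy as np

# Dense DFT matrix versus the FFT: identical results, very different cost.
n = 16
F = np.exp(-2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n)
x = np.random.default_rng(2).standard_normal(n)
assert np.allclose(F @ x, np.fft.fft(x))
print("dense DFT and FFT agree")
```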
  • Time-averaged subspace methods for radar clutter texture retrieval

    Page(s): 1886 - 1898

    Subspace approaches have become popular in the last two decades for retrieving constant-amplitude harmonics observed in additive white noise because they may exhibit superior resolution compared with FFT-based methods, especially with short data records and closely spaced harmonics. We demonstrate that the MUSIC and ESPRIT methods can also be applied when the harmonics are corrupted by white or wideband multiplicative noise. The application context is the retrieval of texture information from high-resolution, low-grazing-angle radar clutter data affected by wideband colored speckle that is modeled as complex multiplicative noise. Texture information is fundamental for clutter cancellation and constant false alarm rate (CFAR) radar detection. A thorough numerical analysis compares the two subspace methods and validates the theoretical findings.

  • A pipelined architecture for the multidimensional DFT

    Page(s): 2096 - 2102

    This paper presents an efficient pipelined architecture for the N^m-point m-dimensional discrete Fourier transform (DFT). By using a two-level index mapping scheme that is different from the conventional decimation-in-time (DIT) or decimation-in-frequency (DIF) algorithms, the conventional pipelined architecture for the one-dimensional (1-D) fast Fourier transform (FFT) can be efficiently used for the computation of higher dimensional DFTs. Compared with systolic architectures, the proposed scheme is area-efficient since the computational elements (CEs) use the minimum number of multipliers, and the number of CEs increases only linearly with respect to the dimension m. It can be easily extended to the N^m-point m-dimensional DFT with large m and/or N, and it is more flexible since the throughput can be easily varied to accommodate various area/throughput requirements.

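
The paper reuses a 1-D FFT pipeline for m-dimensional DFTs via a two-level index mapping; the conventional row-column decomposition below shows the simpler fact it builds on: an N×N 2-D DFT is just N 1-D DFTs along each axis.

```python
import numpy as np

# Row-column decomposition of the 2-D DFT: 1-D FFTs along each axis
# reproduce fft2 exactly.
rng = np.random.default_rng(4)
x = rng.standard_normal((8, 8))
rows_then_cols = np.fft.fft(np.fft.fft(x, axis=0), axis=1)
assert np.allclose(rows_then_cols, np.fft.fft2(x))
print("row-column decomposition matches fft2")
```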
  • Existence and performance of Shalvi-Weinstein estimators

    Page(s): 2031 - 2041

    The Shalvi-Weinstein (1990) criterion has become popular in the design of blind linear estimators of i.i.d. processes transmitted through unknown linear channels in the presence of unknown additive interference. Here, we analyze Shalvi-Weinstein (SW) estimators in a general multiple-input multiple-output (MIMO) setting that allows near-arbitrary source/interference distributions and noisy noninvertible channels. The main contributions of this paper are (i) simple tests for the existence of SW estimators for the desired source and (ii) bounding expressions for the MSE of SW estimators as a function of the minimum attainable MSE and the kurtoses of the source and interferers.

  • Prime factor algorithm for multidimensional discrete cosine transform

    Page(s): 2156 - 2161

    A prime factor fast algorithm is proposed for the computation of the multidimensional forward and inverse discrete cosine transform (DCT). Using the two-dimensional (2-D) DCT as an example, it is shown that an r-dimensional DCT can be obtained from a 2r-dimensional DCT with a post-processing stage. An efficient method for input/output mapping is reported that substantially reduces the computational overhead associated with the prime factor algorithm.

  • Shifted Fourier transform-based tensor algorithms for the 2-D DCT

    Page(s): 2113 - 2126

    In this paper, tensor algorithms for calculating the two-dimensional (2-D) discrete cosine transform (DCT) are presented. The tensor approach is based on the concept of the covering revealing the transforms, which yields in particular the splitting of the shifted 2^r×2^r-point Fourier and cosine transforms into 3·2^(r-1) one-dimensional (1-D) incomplete 2^r-point transforms. The multiplicative complexity of the 2-D 2^r×2^r-point discrete cosine transform in terms of the tensor representation is 3·4^r - 2^(r-2)(r^2+7r+14), which is reduced to (8/3)·4^r - 2^(r-2)(r^2+7r+10) - 20/3 when using the improved tensor algorithm. The multiplicative complexity in the general L^r×L^r case, with a prime L>2, as well as in the L1L2×L1L2 case, with arbitrary coprime L1, L2>1, is provided. Examples of the tensor algorithms for calculating the 8×8-point DCT through 104, 88, and 84 multiplications are given in detail. Based on the proposed concept, a fast algorithm for calculating the 1-D DCT-I is also developed. The multiplicative complexity of the 2^r-point DCT-I is 2^(r+1) - (r-2)(r+5)/2 - 8. Comparative estimates against known algorithms are given.

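
The abstract's 8×8 example (r = 3) can be checked against the complexity expressions as reconstructed here; the printed formulas are a reading of a garbled source, so treat them as assumptions. They do reproduce the quoted counts: 104 multiplications for the tensor algorithm and 84 for the improved one.

```python
# Multiplication counts for the 2-D 2^r x 2^r DCT at r = 3, using the
# reconstructed formulas 3*4^r - 2^(r-2)(r^2+7r+14) and
# (8*4^r - 20)/3 - 2^(r-2)(r^2+7r+10)  (the latter regrouped to stay integer).
r = 3
tensor = 3 * 4**r - 2**(r - 2) * (r**2 + 7*r + 14)
improved = (8 * 4**r - 20) // 3 - 2**(r - 2) * (r**2 + 7*r + 10)
print(tensor, improved)   # 104 84
```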
  • Design of linear combination of weighted medians

    Page(s): 1940 - 1952

    This paper introduces a novel nonlinear filtering structure: the linear combination of weighted medians (LCWM). The proposed filtering scheme is modeled on the structure and design procedure of the linear-phase FIR highpass (HP) filter, which can be obtained by changing the sign of the odd-indexed coefficients of an FIR lowpass (LP) filter. The HP filter can be represented as the difference between two LP subfilters that have all positive coefficients. This representation of the FIR HP filter is analogous to the difference of estimates (DoE), such as the difference of medians (DoM). The DoM is essentially a nonlinear HP filter that is commonly used in edge detection. Based on this observation, we introduce a class of LCWM filters whose output is given by a linear combination of weighted medians of the input sequence. We propose a method of designing 1-D and 2-D LCWM filters satisfying required frequency specifications. The proposed method adopts a transformation from the FIR filter to the LCWM filter. We show that the proposed LCWM filter can offer various frequency filtering characteristics, including "LP," "bandpass (BP)," and "HP" responses.

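
The LCWM design builds on the "difference of estimates" view of highpass filtering, e.g. subtracting a median (lowpass-like) output from the input. A minimal sketch with an unweighted median on an illustrative signal: the nonlinear highpass responds strongly at the spike the median rejects. Signal and window length are assumptions, not the paper's design examples.

```python
import numpy as np

# Difference-of-estimates highpass: input minus sliding median.
x = np.sin(2 * np.pi * np.arange(100) / 50.0)
x[40] += 3.0                                   # impulse the median rejects

def sliding_median(sig, k):
    """Median over a length-k window centered at each sample (edges clipped)."""
    half = k // 2
    return np.array([np.median(sig[max(0, i - half):i + half + 1])
                     for i in range(len(sig))])

hp = x - sliding_median(x, 5)                  # nonlinear highpass (DoE)
print(int(np.argmax(np.abs(hp))))              # 40
```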
  • Finite-horizon robust Kalman filter design

    Page(s): 2103 - 2112

    We study the problem of finite-horizon Kalman filtering for systems involving a norm-bounded uncertain block. A new technique is presented for robust Kalman filter design. This technique involves multiple scaling parameters that can be optimized by solving a semidefinite program. The use of optimized scaling parameters leads to an improved design. A recursive design method that can be applied to real-time applications is also proposed.

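
For context, the nominal (non-robust) scalar Kalman recursion that the robust design generalizes when the uncertainty block vanishes. The state model and noise levels are illustrative assumptions.

```python
import numpy as np

# Scalar Kalman filter: predict/update recursion on a simulated AR(1) state.
rng = np.random.default_rng(6)
a, q, r = 0.95, 0.01, 0.5          # transition, process/measurement noise vars
n = 300
x = np.zeros(n)
for k in range(1, n):
    x[k] = a * x[k - 1] + np.sqrt(q) * rng.standard_normal()
y = x + np.sqrt(r) * rng.standard_normal(n)   # noisy measurements

xh, p = 0.0, 1.0
est = np.zeros(n)
for k in range(n):
    xh, p = a * xh, a * a * p + q  # predict
    g = p / (p + r)                # Kalman gain
    xh = xh + g * (y[k] - xh)      # update with innovation
    p = (1 - g) * p
    est[k] = xh

mse_filter = np.mean((est - x) ** 2)
mse_raw = np.mean((y - x) ** 2)
print(mse_filter < mse_raw)        # filtering beats raw measurements
```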
  • Derivation of a sawtooth iterated extended Kalman smoother via the AECM algorithm

    Page(s): 1899 - 1909

    The iterated extended Kalman smoother (IEKS) is derived under expectation-maximization (EM) algorithm formalism, providing insight into the behavior of the suboptimal extended Kalman filter (EKF) and smoother (EKS). Through an investigation of smoothing algorithms that result from variants of the EM algorithm, the sawtooth iterated extended Kalman smoother (SIEKS) and its computationally inexpensive counterparts are proposed via the alternating expectation conditional maximization (AECM) algorithm. The SIEKS is guaranteed to produce a sequence estimate that moves up the likelihood surface. Numerical simulations, including frequency tracking examples, display the superior performance of the sawtooth EKF over the standard EKF for a range of nonlinear signal models.

  • Asymptotically near-optimal blind estimation of multipath CDMA channels

    Page(s): 2003 - 2017

    In this paper, correlation matching techniques are applied to estimate multipath code division multiple access (CDMA) channels. We arrange the unknown multipath parameters for each of J active users in a vector. Then, the output correlation matrix is parameterized by J unknown rank-one matrices, with each one formulated from the corresponding channel vector. This correlation matrix is further compared with its sample average. The resulting error can be minimized to obtain unbiased estimates of the J unknown rank-one matrices in closed form. Thus, our estimator for each channel vector is derived by singular value decomposition (SVD) on the associated rank-one matrix, up to a scalar ambiguity. It turns out that the performance of our estimator can be improved by introducing an asymptotically optimal weighting matrix in our cost function. This weighting matrix can be estimated directly from data samples with only a small penalty on the asymptotic performance. The asymptotic covariance of our estimator is also derived and can be compared with the Cramer-Rao lower bound, both in closed form. Simulation results show the applicability of the proposed methods and consistency with our theoretical analysis.

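
The core recovery step, extracting a channel vector from its noisy rank-one matrix via SVD up to a scalar ambiguity, can be illustrated in a few lines. The vector length and noise level are assumptions for the sketch.

```python
import numpy as np

# Recover h from a noisy version of h h^H: the principal singular vector
# equals h up to a unit-modulus scalar.
rng = np.random.default_rng(7)
h = rng.standard_normal(8) + 1j * rng.standard_normal(8)
h /= np.linalg.norm(h)
R1 = np.outer(h, h.conj()) + 0.01 * rng.standard_normal((8, 8))

U, s, Vh = np.linalg.svd(R1)
h_hat = U[:, 0]                    # principal singular vector
corr = np.abs(h_hat.conj() @ h)    # |correlation| is phase/scale invariant
print(round(float(corr), 3))       # close to 1
```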
  • Blind separation of instantaneous mixtures of nonstationary sources

    Page(s): 1837 - 1848

    Most source separation algorithms are based on a model of stationary sources. However, it is a simple matter to take advantage of possible nonstationarities of the sources to achieve separation. This paper develops novel approaches in this direction based on the principles of maximum likelihood and minimum mutual information. These principles are exploited by efficient algorithms in both the off-line case (via a new joint diagonalization procedure) and the on-line case (via a Newton-like procedure). Experiments are presented that show the good performance of our algorithms and evidence an interesting feature of our methods: their ability to achieve a kind of super-efficiency. The paper concludes with a discussion contrasting separating methods for non-Gaussian and nonstationary models and emphasizing that, as a matter of fact, "what makes the algorithms work" is, strictly speaking, not the nonstationarity itself but rather the property that each realization of the source signals has a time-varying envelope.

  • High-order balanced multiwavelets: theory, factorization, and design

    Page(s): 1918 - 1930

    This paper deals with multiwavelets and the different properties of approximation and smoothness associated with them. In particular, we focus on the important issue of the preservation of discrete-time polynomial signals by multifilterbanks. We introduce and detail the property of balancing for higher degree discrete-time polynomial signals and link it to a very natural factorization of the refinement mask of the lowpass synthesis multifilter. This factorization turns out to be the counterpart for multiwavelets of the well-known zeros-at-π condition in the usual (scalar) wavelet framework. The property of balancing also proves to be central to the different issues of the preservation of smooth signals by multifilterbanks, the approximation power of finitely generated multiresolution analyses, and the smoothness of the multiscaling functions and multiwavelets. Using these new results, we describe the construction of a family of orthogonal multiwavelets with symmetries and compact support that is indexed by increasing order of balancing. In addition, we also detail, for any given balancing order, the orthogonal multiwavelets with minimum-length multifilters.

  • Steady-state performance limitations of subband adaptive filters

    Page(s): 1982 - 1991

    Nonperfect filterbanks used for subband adaptive filtering (SAF) are known to impose limitations on the steady-state performance of such systems. In this paper, we quantify the minimum mean-square error (MMSE) and the accuracy with which the overall SAF system can model an unknown system that it is set to identify. First, for the MMSE limits, the error is evaluated based on a power spectral density description of aliased signal components, which is accessible via a source model for the subband signals that we derive. Approximations of the MMSE can be embedded in a signal-to-alias ratio (SAR), which is the factor by which the error power can be reduced by adaptive filtering. With simplifications, the SAR depends only on the filterbanks. Second, for modeling accuracy, we link the accuracy of the SAF system to the filterbank's mismatch with perfect reconstruction. When using modulated filterbanks, both error limits (MMSE and modeling inaccuracy) can be linked to the prototype filter. We explicitly derive this for generalized DFT modulated filterbanks and demonstrate the validity of the analytical error limits and their approximations for a number of examples, whereby the analytically predicted limits compare favorably with simulations.

  • A conceptual framework for consistency, conditioning, and stability issues in signal processing

    Page(s): 1971 - 1981

    The techniques employed for analyzing algorithms in numerical linear algebra have evolved significantly since the 1940s. Notable in this evolution is the partitioning of the terminology into categories in which analyses involving infinite precision effects are distinguished from analyses involving finite precision effects. Although the structure of algorithms in signal processing prevents the direct application of typical analysis techniques employed in numerical linear algebra, much can be gained in signal processing from an assimilation of the terminology found there. This paper addresses the need for a conceptual framework for discussing the computed solution from an algorithm by focusing on the distinction between a perturbation analysis of a problem or a method of solution and the stability analysis of an algorithm. A consistent approach to defining these concepts facilitates the task of assessing the numerical quality of a computed solution. This paper discusses numerical analysis techniques for signal processing algorithms and suggests terminology that supports a centralized framework for distinguishing between errors propagated by the nature of the problem and errors propagated through the use of finite-precision arithmetic. By this, we mean that the numerical stability analysis of a signal processing algorithm can be simplified and the meaning of such an analysis made unequivocal.

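
The distinction the paper draws can be seen numerically: a backward-stable algorithm applied to an ill-conditioned problem still returns a solution with large forward error, because the error comes from the problem's conditioning, not the algorithm. The Hilbert matrix is the standard illustration (an assumption of this sketch, not an example from the paper).

```python
import numpy as np

# Problem conditioning versus algorithm stability: solving an ill-conditioned
# Hilbert system with a stable solver still loses many digits.
n = 8
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1)  # Hilbert matrix
x_true = np.ones(n)
b = A @ x_true
x_hat = np.linalg.solve(A, b)      # stable algorithm, ill-conditioned problem

print(f"cond(A) = {np.linalg.cond(A):.2e}")
print(f"forward error = {np.linalg.norm(x_hat - x_true):.2e}")
```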
  • Convergence of the Red-TOWER method for removing noise from data

    Page(s): 1931 - 1939

    By coupling the wavelet transform with a particular nonlinear shrinking function, the Red-TOWER (telescopic optimal wavelet estimation of the risk) method is introduced for removing noise from signals. It is shown that the method yields convergence of the L2 risk to the actual solution at the optimal rate. Moreover, the method is proved to be asymptotically efficient when the regularization parameter is selected by the generalized cross validation (GCV) criterion or the Mallows criterion. Numerical experiments based on synthetic data are provided to compare the performance of the Red-TOWER method with hard-thresholding, soft-thresholding, and neigh-coeff thresholding. Furthermore, numerical tests are also performed when the TOWER method is applied to hard-thresholding, soft-thresholding, and neigh-coeff thresholding, for which the full convergence results are still open.

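
Red-TOWER couples a wavelet transform with a particular nonlinear shrinking function; the classical soft-thresholding baseline it is compared against looks like the sketch below, here with a one-level orthonormal Haar transform on an illustrative piecewise-constant signal (signal, noise level, and the universal threshold choice are assumptions of the sketch).

```python
import numpy as np

# Wavelet denoising baseline: one-level Haar transform, soft-threshold the
# detail coefficients, invert the transform.
rng = np.random.default_rng(8)
clean = np.repeat([0.0, 2.0, -1.0, 1.0], 64)        # piecewise-constant signal
noisy = clean + 0.3 * rng.standard_normal(clean.size)

even, odd = noisy[0::2], noisy[1::2]
approx = (even + odd) / np.sqrt(2)                  # Haar approximation band
detail = (even - odd) / np.sqrt(2)                  # Haar detail band

thr = 0.3 * np.sqrt(2 * np.log(noisy.size))         # universal threshold
detail = np.sign(detail) * np.maximum(np.abs(detail) - thr, 0.0)  # soft shrink

den = np.empty_like(noisy)                          # inverse Haar transform
den[0::2] = (approx + detail) / np.sqrt(2)
den[1::2] = (approx - detail) / np.sqrt(2)

print(np.mean((den - clean) ** 2) < np.mean((noisy - clean) ** 2))
```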
  • Noise-constrained least mean squares algorithm

    Page(s): 1961 - 1970

    We consider the design of an adaptive algorithm for finite impulse response channel estimation that incorporates partial knowledge of the channel, specifically, the additive noise variance. Although the noise variance is not required for the offline Wiener solution, there are potential benefits (and limitations) for the learning behavior of an adaptive solution. In our approach, a Robbins-Monro algorithm is used to minimize the conventional mean square error criterion subject to a noise variance constraint and a penalty term necessary to guarantee uniqueness of the combined weight/multiplier solution. The resulting noise-constrained LMS (NCLMS) algorithm is a type of variable step-size LMS algorithm in which the step-size rule arises naturally from the constraints. A convergence and performance analysis is carried out, and extensive simulations are conducted that compare NCLMS with several adaptive algorithms. This work also provides an appropriate framework for the derivation and analysis of other adaptive algorithms that incorporate partial knowledge of the channel.

  • Output distributional influence function

    Page(s): 1953 - 1960

    When a filter is being selected for an application, it is often essential to know that the behavior of the filter does not change significantly if there are small deviations from the initial assumptions. This robustness of a filter is traditionally explored by means of the influence function (IF) and the change-of-variance function (CVF). However, as these are asymptotic measures, there is uncertainty about the applicability of the obtained results to the finite-length filters used in real-world filtering applications. We present a new method called the output distributional influence function (ODIF) that examines the robustness of finite-length filters. The method gives the most extensive information about robustness for filters with a known output distribution function. As examples, the ODIFs for the distribution function, density function, expectation, and variance are given for the well-known mean and median filters and are interpreted in detail.

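
The influence-function idea behind the ODIF, in miniature: perturb a sample with one gross outlier and compare how the mean and median estimates move. The sample is an illustrative assumption; the qualitative contrast between the two filters is the classical result the abstract builds on.

```python
import numpy as np

# One contaminated observation: the mean shifts substantially, the median
# barely moves (bounded influence).
rng = np.random.default_rng(9)
x = rng.standard_normal(101)
x_bad = x.copy()
x_bad[0] = 1000.0                          # single gross outlier

mean_shift = abs(np.mean(x_bad) - np.mean(x))
median_shift = abs(np.median(x_bad) - np.median(x))
print(mean_shift > 1.0, median_shift < 0.5)
```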

Aims & Scope

IEEE Transactions on Signal Processing covers novel theory, algorithms, performance analyses, and applications of techniques for the processing, understanding, learning, retrieval, mining, and extraction of information from signals.


Meet Our Editors

Editor-in-Chief
Sergios Theodoridis
University of Athens