IEEE Transactions on Signal Processing

Issue 5 • Date May 2010

Displaying Results 1 - 25 of 46
  • Table of contents

    Publication Year: 2010 , Page(s): C1 - C4
    PDF (131 KB)
    Freely Available from IEEE
  • IEEE Transactions on Signal Processing publication information

    Publication Year: 2010 , Page(s): C2
    PDF (39 KB)
    Freely Available from IEEE
  • Structured Least Squares Problems and Robust Estimators

    Publication Year: 2010 , Page(s): 2453 - 2465
    Cited by:  Papers (8)
    PDF (1124 KB) | HTML

    A novel approach is proposed to provide robust and accurate estimates for linear regression problems when both the measurement vector and the coefficient matrix are structured and subject to errors or uncertainty. A new analytic formulation is developed in terms of the gradient flow of the residual norm to analyze and provide estimates for the regression. The presented analysis enables us to establish theoretical performance guarantees against existing methods and also offers a criterion for choosing the regularization parameter autonomously. Theoretical results and simulations in applications such as blind identification, multiple frequency estimation, and deconvolution show that the proposed technique outperforms alternative methods in mean-squared error over a significant range of signal-to-noise ratio values.

  • A Nonlinear Method for Robust Spectral Analysis

    Publication Year: 2010 , Page(s): 2466 - 2474
    Cited by:  Papers (3)
    PDF (405 KB) | HTML

    A nonlinear spectral analyzer, called the Lp-norm periodogram, is obtained by replacing the least-squares criterion with an Lp-norm criterion in the regression formulation of the ordinary periodogram. In this paper, we study the statistical properties of the Lp-norm periodogram for time series with continuous and mixed spectra. We derive the asymptotic distribution of the Lp-norm periodogram and discover an important relationship with the so-called fractional autocorrelation spectrum, which can be viewed as an alternative to the power spectrum in representing the serial dependence of a random process in the frequency domain. In comparison with the ordinary periodogram (p = 2), we show that by varying the value of p in the interval (1,2) the Lp-norm periodogram can strike a balance between robustness against heavy-tailed noise, efficiency under regular conditions, and spectral leakage for time series with mixed spectra. We also show that the Lp-norm periodogram can detect serial dependence of uncorrelated non-Gaussian time series that cannot be detected by the ordinary periodogram.
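As a rough illustration of the idea in the abstract above (not the authors' implementation): the Lp criterion can be evaluated on a frequency grid by replacing the least-squares sinusoid fit of the ordinary periodogram with an Lp fit computed by iteratively reweighted least squares (IRLS). The function name and all numerical details here are assumptions.

```python
import numpy as np

def lp_periodogram(x, freqs, p=1.5, iters=30, eps=1e-8):
    """Sketch of an Lp-norm periodogram: at each frequency, fit a
    sinusoid by minimizing sum |x_t - a*cos - b*sin|^p via IRLS."""
    t = np.arange(len(x))
    out = np.empty(len(freqs))
    for k, f in enumerate(freqs):
        A = np.column_stack([np.cos(2 * np.pi * f * t),
                             np.sin(2 * np.pi * f * t)])
        beta = np.linalg.lstsq(A, x, rcond=None)[0]  # p = 2 starting point
        for _ in range(iters):
            r = x - A @ beta
            w = np.maximum(np.abs(r), eps) ** (p - 2)  # IRLS weights for the Lp loss
            Aw = A * w[:, None]
            beta = np.linalg.solve(A.T @ Aw, Aw.T @ x)
        out[k] = beta @ beta  # squared fitted amplitude as the spectral measure
    return out
```

With p = 2 and no reweighting iterations this reduces to the ordinary periodogram, up to scaling.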

  • Null Space Pursuit: An Operator-based Approach to Adaptive Signal Separation

    Publication Year: 2010 , Page(s): 2475 - 2483
    Cited by:  Papers (15)
    PDF (435 KB) | HTML

    The operator-based signal separation approach uses an adaptive operator to separate a signal into additive subcomponents. The approach can be formulated as an optimization problem whose optimal solution can be derived analytically. However, the following issues must still be resolved: estimating the robustness of the operator's parameters and the Lagrangian multipliers, and determining how much of the information in the null space of the operator should be retained in the residual signal. To address these problems, we propose a novel optimization formula for operator-based signal separation and show that the parameters of the problem can be estimated adaptively. We demonstrate the effectiveness of the proposed method by processing several signals, including real-life signals.

  • A General Algebraic Algorithm for Blind Extraction of One Source in a MIMO Convolutive Mixture

    Publication Year: 2010 , Page(s): 2484 - 2493
    Cited by:  Papers (3)
    PDF (717 KB) | HTML

    The paper deals with the problem of blind source extraction from a multiple-input/multiple-output (MIMO) convolutive mixture. We define a new criterion for source extraction which uses higher-order contrast functions based on so-called reference signals. It generalizes existing reference-based contrasts. In order to optimize the new criterion, we propose a general algebraic algorithm based on best rank-1 tensor approximation. Computer simulations illustrate the good behavior of our algorithm and its advantages over other approaches.
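The best rank-1 tensor approximation that the criterion builds on can be sketched with a generic higher-order power iteration (alternating updates of unit-norm factors). This is a standard textbook procedure, not the paper's specific algebraic algorithm, and the uniform initialization below is an arbitrary choice that can fail on adversarial tensors.

```python
import numpy as np

def rank1_approx(T, iters=100):
    """Best rank-1 approximation lam * (a o b o c) of a 3-way tensor by
    higher-order power iteration: alternately update each unit-norm
    factor while holding the other two fixed."""
    b = np.ones(T.shape[1]) / np.sqrt(T.shape[1])  # simple (arbitrary) init
    c = np.ones(T.shape[2]) / np.sqrt(T.shape[2])
    for _ in range(iters):
        a = np.einsum('ijk,j,k->i', T, b, c)
        a /= np.linalg.norm(a)
        b = np.einsum('ijk,i,k->j', T, a, c)
        b /= np.linalg.norm(b)
        c = np.einsum('ijk,i,j->k', T, a, b)
        c /= np.linalg.norm(c)
    lam = np.einsum('ijk,i,j,k->', T, a, b, c)  # scale of the rank-1 term
    return lam, a, b, c
```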

  • Correlation and Spectral Methods for Determining Uncertainty in Propagating Discontinuities

    Publication Year: 2010 , Page(s): 2494 - 2508
    Cited by:  Papers (1)
    PDF (1612 KB) | HTML

    The accurate determination of the speed of a propagating disturbance is important for a number of applications. A nonstationary cross-spectral density phase (NCSDP) technique was developed to provide a statistical estimate of the propagation time of sharp discontinuities such as steps or spikes that model shock or detonation waves. The uncertainty of the phase estimate depends on the coherence between the signals. For discrete implementation of the NCSDP technique, a "weighted-resetting-unwrap" of the phase angle was proposed to discard values of the coherence below a threshold value; that is, only the unwrapped phase angle above the threshold was accepted. In addition, an envelope function was used which improved the technique. The technique was found to be unsuitable for step disturbances but was more effective in estimating the time delay with a small standard deviation if the sharp disturbance also showed a rapid decay. The method was applied to shock and detonation waves.
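A crude, stationary stand-in for the cross-spectral phase idea: estimate the delay between two sensors from the slope of the averaged cross-spectrum phase, discarding bins whose coherence falls below a threshold. The NCSDP technique is nonstationary and considerably more refined; everything below (the segmentation, the threshold, the line fit through the origin) is an assumption for illustration only.

```python
import numpy as np

def delay_from_cross_phase(x, y, fs=1.0, coh_thresh=0.5, nseg=8):
    """Estimate the delay of y relative to x from the slope of the
    Welch-averaged cross-spectral density phase, keeping only bins
    whose coherence exceeds a threshold."""
    n = len(x) // nseg
    win = np.hanning(n)
    Pxx = Pyy = 0.0
    Pxy = 0.0 + 0.0j
    for k in range(nseg):
        X = np.fft.rfft(win * x[k * n:(k + 1) * n])
        Y = np.fft.rfft(win * y[k * n:(k + 1) * n])
        Pxx = Pxx + np.abs(X) ** 2
        Pyy = Pyy + np.abs(Y) ** 2
        Pxy = Pxy + X.conj() * Y
    f = np.fft.rfftfreq(n, d=1.0 / fs)[1:]          # drop DC before unwrapping
    phase = np.unwrap(np.angle(Pxy[1:]))
    coh = (np.abs(Pxy) ** 2 / (Pxx * Pyy + 1e-15))[1:]
    keep = coh > coh_thresh
    # a pure delay d gives phase(f) = -2*pi*f*d: fit a line through the origin
    slope = np.sum(f[keep] * phase[keep]) / np.sum(f[keep] ** 2)
    return -slope / (2.0 * np.pi)
```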

  • Robust Kalman Filter Based on a Generalized Maximum-Likelihood-Type Estimator

    Publication Year: 2010 , Page(s): 2509 - 2520
    Cited by:  Papers (10)
    PDF (720 KB) | HTML

    A new robust Kalman filter is proposed that detects and bounds the influence of outliers in a discrete linear system, including those generated by thick-tailed noise distributions such as impulsive noise. Besides outliers induced in the process and observation noises, we consider in this paper a new type called structural outliers. For a filter to be able to counter the effect of these outliers, observation redundancy in the system is necessary. We have therefore developed a robust filter in a batch-mode regression form to process the observations and predictions together, making it very effective in suppressing multiple outliers. A key step in this filter is a new prewhitening method that incorporates a robust multivariate estimator of location and covariance. The other main step is the use of a generalized maximum likelihood-type (GM) estimator based on Schweppe's proposal and the Huber function, which has a high statistical efficiency at the Gaussian distribution and a positive breakdown point in regression. The latter is defined as the largest fraction of contamination for which the estimator yields a finite maximum bias under contamination. This GM-estimator enables our filter to bound the influence of residual and position, where the former measures the effects of observation and innovation outliers and the latter assesses that of structural outliers. The estimator is solved via the iteratively reweighted least squares (IRLS) algorithm, in which the residuals are standardized utilizing robust weights and scale estimates. Finally, the state estimation error covariance matrix of the proposed GM-Kalman filter is derived from its influence function. Simulation results revealed that our filter compares favorably with the H∞ filter in the presence of outliers.
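The regression building block, an M-estimator with the Huber function solved by IRLS with a robust scale estimate, can be sketched as follows. This is a plain Huber M-estimator, not the paper's Schweppe-type GM-estimator with position weights; the tuning constant c = 1.345 is the usual choice for roughly 95% efficiency at the Gaussian distribution.

```python
import numpy as np

def huber_irls(A, y, c=1.345, iters=50, tol=1e-8):
    """Robust linear regression with a Huber M-estimator, solved by
    iteratively reweighted least squares (IRLS). Residuals are
    standardized by a MAD-based robust scale before weighting."""
    beta = np.linalg.lstsq(A, y, rcond=None)[0]          # ordinary LS start
    for _ in range(iters):
        r = y - A @ beta
        scale = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12  # robust MAD scale
        u = r / scale
        # Huber weights: 1 inside the threshold, c/|u| outside
        w = np.where(np.abs(u) <= c, 1.0, c / np.maximum(np.abs(u), 1e-12))
        Aw = A * w[:, None]
        beta_new = np.linalg.solve(A.T @ Aw, Aw.T @ y)   # weighted normal equations
        done = np.linalg.norm(beta_new - beta) < tol
        beta = beta_new
        if done:
            break
    return beta
```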

  • A Continuous-Time Linear System Identification Method for Slowly Sampled Data

    Publication Year: 2010 , Page(s): 2521 - 2533
    Cited by:  Papers (5)
    PDF (520 KB) | HTML

    Both direct and indirect methods exist for identifying continuous-time linear systems. A direct method estimates continuous-time input and output signals from their samples and then uses them to obtain a continuous-time model, whereas an indirect method estimates a discrete-time model first. Both methods rely on fast sampling to ensure good accuracy. In this paper, we propose a more direct method where a continuous-time linear model is directly fitted to the available samples. This method produces an exact model asymptotically, modulo some possible aliasing ambiguity, even when the sampling rate is relatively slow. We also state conditions under which the aliasing ambiguity can be resolved, and we provide experiments showing that the proposed method is a valid option when a slow sampling frequency must be used but a large number of samples is available.

  • Variance-Constrained H∞ Filtering for a Class of Nonlinear Time-Varying Systems With Multiple Missing Measurements: The Finite-Horizon Case

    Publication Year: 2010 , Page(s): 2534 - 2543
    Cited by:  Papers (38)
    PDF (542 KB) | HTML

    This paper is concerned with the robust H∞ finite-horizon filtering problem for a class of uncertain nonlinear discrete time-varying stochastic systems with multiple missing measurements and error variance constraints. All the system parameters are time-varying and the uncertainty enters into the state matrix. The measurement missing phenomenon occurs in a random way, and the missing probability for each sensor is governed by an individual random variable satisfying a certain probabilistic distribution in the interval [0, 1]. The stochastic nonlinearities under consideration here are described by statistical means which can cover several classes of well-studied nonlinearities. Sufficient conditions are derived for a finite-horizon filter to satisfy both the estimation error variance constraints and the prescribed H∞ performance requirement. These conditions are expressed in terms of the feasibility of a series of recursive linear matrix inequalities (RLMIs). Simulation results demonstrate the effectiveness of the developed filter design scheme.

  • FIR Smoothing of Discrete-Time Polynomial Signals in State Space

    Publication Year: 2010 , Page(s): 2544 - 2555
    Cited by:  Papers (14)
    PDF (1130 KB) | HTML

    We address a smoothing finite impulse response (FIR) filtering solution for deterministic discrete-time signals represented in state space with finite-degree polynomials. The optimal smoothing FIR filter is derived in an exact matrix form requiring the initial state and the measurement noise covariance function. The relevant unbiased solution is represented both in the matrix and polynomial forms that do not involve any knowledge about measurement noise and initial state. The unique l-degree unbiased gain and the noise power gain are derived for a general case. The widely used low-degree gains are investigated in detail. As an example, the best linear fit is provided for a two-state clock error model.
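The unbiasedness idea can be illustrated with a toy gain computation: an FIR gain obtained by projecting N samples onto an l-degree polynomial basis reproduces any polynomial of that degree exactly, with no knowledge of the noise statistics or the initial state. This sketch evaluates the fit at the most recent sample and is illustrative only; the paper derives its gains in exact matrix and polynomial forms.

```python
import numpy as np

def unbiased_fir_gain(N, degree):
    """Unbiased FIR gain for an l-degree polynomial signal model:
    least-squares projection of N samples onto the polynomial basis,
    read off at the last (most recent) time point."""
    t = np.arange(N)
    H = np.vander(t, degree + 1, increasing=True)          # polynomial basis
    # fit coefficients via pinv(H), then evaluate the polynomial at t = N-1
    h = np.vander([N - 1], degree + 1, increasing=True) @ np.linalg.pinv(H)
    return h.ravel()
```

Applying the gain to a noise-free polynomial of matching degree returns the last sample exactly; the gain coefficients also sum to one, the usual deadbeat/unbiasedness constraint.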

  • Systematic Construction of Real Lapped Tight Frame Transforms

    Publication Year: 2010 , Page(s): 2556 - 2567
    Cited by:  Papers (2)
    PDF (945 KB) | HTML

    We present a constructive algorithm for the design of real lapped equal-norm tight frame transforms. These transforms can be efficiently implemented through filter banks and have recently been proposed as a redundant counterpart to lapped orthogonal transforms, as well as an infinite-dimensional counterpart to harmonic tight frames. The proposed construction consists of two parts: First, we design a large class of new real lapped orthogonal transforms derived from submatrices of the discrete Fourier transform. Then, we seed these to obtain real lapped tight frame transforms corresponding to tight, equal-norm frames. We identify those frames that are maximally robust to erasures, and show that our construction leads to a large class of new lapped orthogonal transforms as well as new lapped tight frame transforms.

  • Short-Time Fractional Fourier Transform and Its Applications

    Publication Year: 2010 , Page(s): 2568 - 2580
    Cited by:  Papers (23)
    PDF (1664 KB) | HTML

    The fractional Fourier transform (FRFT) is a potent tool to analyze the chirp signal. However, it fails to locate the fractional Fourier domain (FRFD)-frequency contents, which is required in some applications. The short-time fractional Fourier transform (STFRFT) is proposed to solve this problem. It displays the time and FRFD-frequency information jointly in the short-time fractional Fourier domain (STFRFD). Two aspects of its performance are considered: the 2-D resolution and the STFRFD support. The time-FRFD-bandwidth product (TFBP) is defined to measure the resolvable area and the STFRFD support. The optimal STFRFT is obtained with the criteria that maximize the 2-D resolution and minimize the STFRFD support. Its inverse transform, properties, and computational complexity are presented. Two applications are discussed: the estimation of the time-of-arrival (TOA) and pulsewidth (PW) of chirp signals, and STFRFD filtering. Simulations verify the validity of the proposed algorithms.

  • Superposition Frames for Adaptive Time-Frequency Analysis and Fast Reconstruction

    Publication Year: 2010 , Page(s): 2581 - 2596
    Cited by:  Papers (10)
    PDF (1263 KB) | HTML

    In this paper, we introduce a broad family of adaptive, linear time-frequency representations termed superposition frames, and show that they admit desirable fast overlap-add reconstruction properties akin to standard short-time Fourier techniques. This approach stands in contrast to many adaptive time-frequency representations in the existing literature, which, while more flexible than standard fixed-resolution approaches, typically fail to provide for efficient reconstruction and often lack the regular structure necessary for precise frame-theoretic analysis. Our main technical contributions come through the development of properties which ensure that our superposition construction provides for a numerically stable, invertible signal representation. Our primary algorithmic contributions come via the introduction and discussion of two signal adaptation schemes based on greedy selection and dynamic programming, respectively. We conclude with two short enhancement examples that serve to highlight potential applications of our approach.

  • Noninvertible Gabor Transforms

    Publication Year: 2010 , Page(s): 2597 - 2612
    Cited by:  Papers (1)
    PDF (855 KB) | HTML

    Time-frequency analysis, such as the Gabor transform, plays an important role in many signal processing applications. The redundancy of such representations is often directly related to the computational load of any algorithm operating in the transform domain. To reduce complexity, it may be desirable to increase the time and frequency sampling intervals beyond the point where the transform is invertible, at the cost of an inevitable recovery error. In this paper we initiate the study of recovery procedures for noninvertible Gabor representations. We propose using fixed analysis and synthesis windows, chosen, e.g., according to implementation constraints, and processing the Gabor coefficients prior to synthesis in order to shape the reconstructed signal. We develop three methods for signal recovery. The first follows from the consistency requirement, namely that the recovered signal has the same Gabor representation as the input signal. The second is based on minimization of a worst-case error. Last, we develop a recovery technique based on the assumption that the input signal lies in some subspace of L2. We show that for each of the criteria, the manipulation of the transform coefficients amounts to a 2D twisted convolution, which we show how to perform using a filter bank. When the undersampling factor is an integer, the processing reduces to standard 2D convolution. We provide simulation results demonstrating the advantages and weaknesses of each of the algorithms.

  • Best Basis Compressed Sensing

    Publication Year: 2010 , Page(s): 2613 - 2622
    Cited by:  Papers (27)
    PDF (1044 KB) | HTML

    This paper proposes a best basis extension of compressed sensing recovery. Instead of regularizing the compressed sensing inverse problem with a sparsity prior in a fixed basis, our framework makes use of sparsity in a tree-structured dictionary of orthogonal bases. A new iterative thresholding algorithm performs both the recovery of the signal and the estimation of the best basis. The resulting reconstruction from compressive measurements adapts the basis to the structure of the sensed signal. Adaptivity is crucial to capture the regularity of complex natural signals. Numerical experiments on sounds and geometrical images show that this best basis search improves the recovery with respect to fixed sparsity priors.
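For reference, the fixed-basis baseline that the best-basis search extends is plain iterative soft-thresholding (ISTA). The sketch below recovers a signal sparse in the canonical basis; the paper's algorithm additionally re-estimates the best basis from a tree-structured dictionary inside such a loop, which is not shown here.

```python
import numpy as np

def ista(Phi, y, lam=0.01, iters=2000):
    """Iterative soft-thresholding (ISTA) for the lasso problem
    min_x 0.5*||y - Phi x||^2 + lam*||x||_1, i.e. compressed sensing
    recovery with a sparsity prior in a fixed (canonical) basis."""
    x = np.zeros(Phi.shape[1])
    L = np.linalg.norm(Phi, 2) ** 2                      # Lipschitz constant of the gradient
    for _ in range(iters):
        x = x + Phi.T @ (y - Phi @ x) / L                # gradient step on the data fit
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # soft threshold (prox of L1)
    return x
```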

  • Optimal Estimation and Detection in Homogeneous Spaces

    Publication Year: 2010 , Page(s): 2623 - 2635
    Cited by:  Papers (1)
    PDF (845 KB) | HTML

    This paper presents estimation and detection techniques in homogeneous spaces that are optimal under the squared error loss function. The data is collected on a manifold which forms a homogeneous space under the transitive action of a compact Lie group. Signal estimation problems are addressed by formulating Wiener-Hopf equations for homogeneous spaces. The coefficient functions of these equations are the signal correlations, which are assumed to be known. The resulting coupled integral equations on the manifold are converted to Wiener-Hopf convolutional integral equations on the group. These are solved using the Peter-Weyl theory of Fourier transform on compact Lie groups. The computational complexity of this algorithm is reduced using the bi-invariance of the correlations with respect to a stabilizer subgroup. The theory of matched filtering for isotropic signal fields is developed for signal classification: given a set of template signals on the manifold and a noisy test signal, the objective is to optimally detect the template buried in the test signal. This is accomplished by designing a filter on the manifold that maximizes the signal-to-noise ratio (SNR) of the filtered output. An expression for the SNR is obtained as a ratio of quadratic forms expressed as Haar integrals over the transformation group. These integrals are expressed in the Fourier domain as infinite sums over the irreducible representations. Simplification of these sums is achieved by invariance properties of the signal function and the noise correlation function. The Wiener filter and matched filter are developed for an abstract homogeneous space and then specialized to the case of spherical signals under the action of the rotation group. Applications of these algorithms to denoising of 3D surface data, visual navigation with an omnidirectional camera, and detection of compact embedded objects in a stochastic background are discussed with experimental results.

  • Recovering Signals From Lowpass Data

    Publication Year: 2010 , Page(s): 2636 - 2646
    Cited by:  Papers (3)
    PDF (450 KB) | HTML

    The problem of recovering a signal from its low frequency components occurs often in practical applications due to the lowpass behavior of many physical systems. Here, we study in detail conditions under which a signal can be determined from its low-frequency content. We focus on signals in shift-invariant spaces generated by multiple generators. For these signals, we derive necessary conditions on the cutoff frequency of the lowpass filter as well as necessary and sufficient conditions on the generators such that signal recovery is possible. When the lowpass content is not sufficient to determine the signal, we propose appropriate pre-processing that can improve the reconstruction ability. In particular, we show that modulating the signal with one or more mixing functions prior to lowpass filtering can ensure the recovery of the signal in many cases and reduce the necessary bandwidth of the filter.

  • Some Aspects of Band-Limited Extrapolations

    Publication Year: 2010 , Page(s): 2647 - 2653
    PDF (1082 KB) | HTML

    In this paper, some problems in band-limited extrapolation are discussed. These aspects include the computation of the inverse Fourier transform, the accuracy of the extrapolation, the ill-posedness of the problem, and regularization methods.

  • Dithered A/D Conversion of Smooth Non-Bandlimited Signals

    Publication Year: 2010 , Page(s): 2654 - 2666
    Cited by:  Papers (4)
    PDF (583 KB) | HTML

    The classical method for sampling a smooth non-bandlimited signal requires a lowpass anti-aliasing filter. In applications like distributed sampling, where sampling and quantization operations precede filtering, aliasing error is inevitable. Motivated by such applications, the sampling of smooth and bounded non-bandlimited signals whose spectra have a finite absolute first moment, without the use of an analog anti-alias lowpass filter, is studied in a centralized setup. Upper bounds for the distortion-rate function are derived by first upper-bounding the distortion with a linear combination of errors due to aliasing and quantization and then balancing their contributions by selecting an appropriate reconstruction bandwidth. For a class of dithered sampling methods, it is shown that a lower quantizer precision can be traded for a higher sampling rate without affecting the realizable high-rate asymptotic distortion-rate characteristics. These results are applied to signals with exponentially and polynomially decaying spectral characteristics, as well as truncated bandlimited signals, to uncover the realizable distortion-rate characteristics for these signal classes.

  • A Linear Cost Algorithm to Compute the Discrete Gabor Transform

    Publication Year: 2010 , Page(s): 2667 - 2674
    Cited by:  Papers (1)
    PDF (573 KB) | HTML

    In this paper, we propose an alternative efficient method to calculate the Gabor coefficients of a signal given a synthesis window with a support much smaller than the length of the signal. The algorithm uses the canonical dual of the window (which does not need to be calculated beforehand) and achieves a computational cost that is linear in the signal length for both analysis and synthesis. This is done by exploiting the block structure of the matrices and using an ad hoc Cholesky decomposition of the Gabor frame matrix.

  • Bayesian Orthogonal Component Analysis for Sparse Representation

    Publication Year: 2010 , Page(s): 2675 - 2685
    Cited by:  Papers (16)
    PDF (1453 KB) | HTML

    This paper addresses the problem of identifying a lower dimensional space where observed data can be sparsely represented. This undercomplete dictionary learning task can be formulated as a blind separation problem of sparse sources linearly mixed with an unknown orthogonal mixing matrix. This issue is formulated in a Bayesian framework. First, the unknown sparse sources are modeled as Bernoulli-Gaussian processes. To promote sparsity, a weighted mixture of an atom at zero and a Gaussian distribution is proposed as prior distribution for the unobserved sources. A noninformative prior distribution defined on an appropriate Stiefel manifold is selected for the mixing matrix. The Bayesian inference on the unknown parameters is conducted using a Markov chain Monte Carlo (MCMC) method. A partially collapsed Gibbs sampler is designed to generate samples asymptotically distributed according to the joint posterior distribution of the unknown model parameters and hyperparameters. These samples are then used to approximate the joint maximum a posteriori estimator of the sources and mixing matrix. Simulations conducted on synthetic data are reported to illustrate the performance of the method for recovering sparse representations. An application to sparse coding on an undercomplete dictionary is finally investigated.

  • Active Learning and Basis Selection for Kernel-Based Linear Models: A Bayesian Perspective

    Publication Year: 2010 , Page(s): 2686 - 2700
    Cited by:  Papers (5)
    PDF (1085 KB) | HTML

    We develop an active learning algorithm for kernel-based linear regression and classification. The proposed greedy algorithm employs a minimum-entropy criterion derived using a Bayesian interpretation of ridge regression. We assume access to a matrix, Ψ ∈ ℝ^(N×N), for which the (i,j)th element is defined by the kernel function K(ψ_i, ψ_j) ∈ ℝ, with the observed data ψ_i ∈ ℝ^d. We seek a model, M : ψ_i → y_i, where y_i is a real-valued response or integer-valued label, which we do not have access to a priori. To achieve this goal, a submatrix, Ψ_(Il,Ib) ∈ ℝ^(n×m), is sought that corresponds to the intersection of n rows and m columns of Ψ, indexed by the sets Il and Ib, respectively. Typically m ≪ N and n ≪ N. We have two objectives: (i) determine the m columns of Ψ, indexed by the set Ib, that are the most informative for building a linear model, M : [1, ψ_(i,Ib)]^T → y_i, without any knowledge of {y_i}_(i=1)^N, and (ii) using active learning, sequentially determine which subset of n elements of {y_i}_(i=1)^N should be acquired; both stopping values, |Ib| = m and |Il| = n, are also to be inferred from the data. These steps are taken with the goal of minimizing the uncertainty of the model parameters, x, as measured by the differential entropy of its posterior distribution. The parameter vector x ∈ ℝ^m, as well as the model bias η ∈ ℝ, is then learned from the resulting problem, y_(Il) = Ψ_(Il,Ib) x + η1 + ε. The remaining N − n responses/labels not included in y_(Il) can be inferred by applying x to the remaining N − n rows of Ψ_(:,Ib). We show experimental results for several regression and classification problems, and compare to other active learning methods.
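The greedy minimum-entropy selection can be sketched for plain Bayesian ridge regression: the differential entropy of the Gaussian posterior over the weights is proportional to the log-determinant of its covariance, so the greedy step picks the row psi maximizing psi^T Sigma psi. This is an illustrative reduction with assumed prior precision alpha and noise variance sigma2, not the paper's joint basis-selection procedure.

```python
import numpy as np

def greedy_active_rows(Psi, n_pick, alpha=1.0, sigma2=1.0):
    """Greedy active learning for Bayesian ridge regression: at each step
    acquire the row that most reduces the posterior differential entropy
    (log-det of the weight covariance), i.e. the row maximizing
    psi^T Sigma psi, then apply the rank-1 posterior update."""
    m = Psi.shape[1]
    Sigma = np.eye(m) / alpha                  # prior covariance of the weights
    picked = []
    for _ in range(n_pick):
        gains = np.einsum('ij,jk,ik->i', Psi, Sigma, Psi)  # diag(Psi Sigma Psi^T)
        gains[picked] = -np.inf                # never pick a row twice
        i = int(np.argmax(gains))
        picked.append(i)
        v = Sigma @ Psi[i]
        Sigma = Sigma - np.outer(v, v) / (sigma2 + Psi[i] @ v)  # rank-1 update
    return picked
```

Note how, once a direction has been observed, near-duplicate rows lose almost all of their gain and the selection moves on to unexplored directions.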

  • Learning Gaussian Tree Models: Analysis of Error Exponents and Extremal Structures

    Publication Year: 2010 , Page(s): 2701 - 2714
    Cited by:  Papers (9)
    PDF (789 KB) | HTML

    The problem of learning tree-structured Gaussian graphical models from independent and identically distributed (i.i.d.) samples is considered. The influence of the tree structure and the parameters of the Gaussian distribution on the learning rate as the number of samples increases is discussed. Specifically, the error exponent corresponding to the event that the estimated tree structure differs from the actual unknown tree structure of the distribution is analyzed. Finding the error exponent reduces to a least-squares problem in the very noisy learning regime. In this regime, it is shown that the extremal tree structure that minimizes the error exponent is the star for any fixed set of correlation coefficients on the edges of the tree. If the magnitudes of all the correlation coefficients are less than 0.63, it is also shown that the tree structure that maximizes the error exponent is the Markov chain. In other words, the star and the chain graphs represent the hardest and the easiest structures to learn in the class of tree-structured Gaussian graphical models. This result can also be intuitively explained by correlation decay: pairs of nodes which are far apart, in terms of graph distance, are unlikely to be mistaken as edges by the maximum-likelihood estimator in the asymptotic regime.
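The maximum-likelihood structure estimator analyzed here is the Chow-Liu tree: a maximum-weight spanning tree under pairwise mutual information, which for Gaussian variables is I(i;j) = -0.5 log(1 - ρ_ij²). A minimal sketch, using Prim's algorithm with a deliberately naive O(p³) loop for clarity:

```python
import numpy as np

def chow_liu_gaussian(X):
    """Maximum-likelihood tree structure for a Gaussian graphical model
    (Chow-Liu): maximum-weight spanning tree with pairwise Gaussian
    mutual informations I(i;j) = -0.5*log(1 - rho_ij^2) as edge weights.
    X has one variable per column."""
    rho = np.corrcoef(X, rowvar=False)
    p = rho.shape[0]
    mi = -0.5 * np.log(np.clip(1.0 - rho ** 2, 1e-12, 1.0))
    in_tree = [0]
    edges = []
    # Prim's algorithm on the complete graph with MI weights
    while len(in_tree) < p:
        best = None
        for i in in_tree:
            for j in range(p):
                if j not in in_tree and (best is None or mi[i, j] > best[2]):
                    best = (i, j, mi[i, j])
        edges.append((best[0], best[1]))
        in_tree.append(best[1])
    return edges
```

On data generated by a Markov chain, the recovered spanning tree is the chain itself, consistent with the correlation-decay intuition in the abstract.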

  • Tensor-Based Spatial Smoothing (TB-SS) Using Multiple Snapshots

    Publication Year: 2010 , Page(s): 2715 - 2728
    Cited by:  Papers (2)
    PDF (2383 KB)

    Tensor-based spatial smoothing (TB-SS) is a preprocessing technique for subspace-based parameter estimation of damped and undamped harmonics. In TB-SS, multichannel data is packed into a measurement tensor. We propose a tensor-based signal subspace estimation scheme that exploits the multidimensional invariance property exhibited by the highly structured measurement tensor. In the presence of noise, a tensor-based subspace estimate obtained via TB-SS is a better estimate of the desired signal subspace than the subspace estimate obtained by, for example, the singular value decomposition of a spatially smoothed matrix or a multilinear algebra approach reported in the literature. Thus, TB-SS in conjunction with subspace-based parameter estimation schemes performs significantly better than subspace-based parameter estimation algorithms applied to the existing matrix-based subspace estimate. Another advantage of TB-SS over conventional spatial smoothing is that TB-SS is insensitive to changes in the number of samples per subarray provided that the number of subarrays is greater than the number of harmonics. In this paper, we present, as an example, TB-SS in conjunction with ESPRIT-type algorithms for the parameter estimation of one-dimensional (1-D) damped and undamped harmonics. A closed-form expression for the stochastic Cramér-Rao bound (CRB) for the 1-D damped harmonic retrieval problem is also derived.


Aims & Scope

IEEE Transactions on Signal Processing covers novel theory, algorithms, performance analyses, and applications of techniques for the processing, understanding, learning, retrieval, mining, and extraction of information from signals.


Meet Our Editors

Editor-in-Chief
Sergios Theodoridis
University of Athens