
IEEE Transactions on Signal Processing

Issue 12 • Dec. 2012


Displaying Results 1 - 25 of 64
  • Table of contents

    Publication Year: 2012 , Page(s): C1 - C4
    PDF (203 KB)
    Freely Available from IEEE
  • IEEE Transactions on Signal Processing publication information

    Publication Year: 2012 , Page(s): C2
    PDF (136 KB)
    Freely Available from IEEE
  • Multiple Quadrature Kalman Filtering

    Publication Year: 2012 , Page(s): 6125 - 6137
    Cited by:  Papers (2)
    PDF (3263 KB) | HTML

    Bayesian filtering is a statistical approach that naturally appears in many signal processing problems. Ranging from the Kalman filter to particle filters, there is a plethora of alternatives depending on the model assumptions. With the exception of very few tractable cases, one has to resort to suboptimal methods because the Bayesian recursion cannot be computed analytically for general dynamical systems, which is why the problem has attracted the attention of many researchers seeking efficient algorithms to implement it. We focus on a recently developed algorithm known as the Quadrature Kalman filter (QKF). Under the Gaussian assumption, the QKF can tackle arbitrary nonlinearities by resorting to Gauss-Hermite quadrature rules. However, its complexity increases exponentially with the state-space dimension. In this paper we study a complexity reduction technique for the QKF based on partitioning the state-space, referred to as the Multiple QKF. We prove that partitioning schemes can effectively mitigate the curse of dimensionality in the QKF. Simulation results are also provided to show that nearly-optimal performance can be attained while drastically reducing the computational complexity with respect to state-of-the-art algorithms that can deal with such nonlinear filtering problems.

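    As a concrete picture of the quadrature machinery, here is a minimal sketch (illustrative, not the paper's code) that approximates E[f(x)] for a Gaussian x with a tensor-product Gauss-Hermite rule; the n**d point count is exactly the exponential growth that the Multiple QKF's state-space partitioning is designed to curb.

        import itertools
        import numpy as np

        def gh_expectation(f, mu, Sigma, n=5):
            """Approximate E[f(x)] for x ~ N(mu, Sigma) with an n-point rule per axis."""
            d = len(mu)
            nodes, weights = np.polynomial.hermite.hermgauss(n)  # weight exp(-t**2)
            L = np.linalg.cholesky(Sigma)
            total = 0.0
            for idx in itertools.product(range(n), repeat=d):    # n**d points: the curse
                z = np.sqrt(2.0) * nodes[list(idx)]
                w = np.prod(weights[list(idx)]) / np.pi ** (d / 2.0)
                total += w * f(mu + L @ z)
            return total

        # E[sin(x0) + x1**2] for a correlated 2-D Gaussian; exact value is 0.5
        mu = np.zeros(2)
        Sigma = np.array([[1.0, 0.3], [0.3, 0.5]])
        print(gh_expectation(lambda x: np.sin(x[0]) + x[1] ** 2, mu, Sigma))
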
  • Moment Estimation Using a Marginalized Transform

    Publication Year: 2012 , Page(s): 6138 - 6150
    Cited by:  Papers (2)
    PDF (2726 KB) | HTML

    We present a method for estimating the mean and covariance of a transformed Gaussian random variable. The method is based on evaluations of the transforming function and in that respect resembles the unscented transform and Gauss-Hermite integration. The information provided by the evaluations is used in a Bayesian framework to form a posterior description of the parameters in a model of the transforming function. Estimates are then derived by marginalizing these parameters from the analytical expressions for the mean and covariance. An estimation algorithm, based on the assumption that the transforming function can be described using Hermite polynomials, is presented and applied to the nonlinear filtering problem. The resulting marginalized transform (MT) estimator is compared to the cubature rule, the unscented transform, and the divided difference estimator. The evaluations show that the presented method performs better than these methods, particularly in estimating the covariance matrix. Contrary to the unscented transform, the resulting approximation of the covariance matrix is guaranteed to be positive-semidefinite.

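    For contrast with the MT estimator, a minimal sketch of the unscented transform baseline named in the abstract (the kappa parameter and sigma-point convention are common textbook choices, not taken from the paper):

        import numpy as np

        def unscented_transform(f, mu, Sigma, kappa=1.0):
            """Propagate N(mu, Sigma) through f using 2d+1 sigma points."""
            d = len(mu)
            S = np.linalg.cholesky((d + kappa) * Sigma)
            pts = [mu] + [mu + S[:, i] for i in range(d)] + [mu - S[:, i] for i in range(d)]
            w = np.full(2 * d + 1, 1.0 / (2.0 * (d + kappa)))
            w[0] = kappa / (d + kappa)
            ys = np.array([np.atleast_1d(f(p)) for p in pts])
            mean = w @ ys
            cov = (w[:, None] * (ys - mean)).T @ (ys - mean)
            return mean, cov

        mean, cov = unscented_transform(lambda x: np.array([x[0] ** 2, x[0] + x[1]]),
                                        np.zeros(2), np.eye(2))
        print(mean, cov)   # true mean of [x0**2, x0+x1] is [1, 0]
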
  • Minimax-Optimal Hypothesis Testing With Estimation-Dependent Costs

    Publication Year: 2012 , Page(s): 6151 - 6165
    Cited by:  Papers (1)
    PDF (5241 KB) | HTML

    This paper introduces a novel framework for hypothesis testing in the presence of unknown parameters. The objective is to decide between two hypotheses, where each one involves unknown parameters that are of interest to be estimated. Existing approaches to detection and estimation place the primary emphasis on the detection part, solving it optimally while treating the estimation part suboptimally. The proposed framework, in contrast, treats both problems simultaneously and in a jointly optimal manner. The resulting test exhibits the flexibility to strike any desired balance between detection and estimation accuracy, so that, depending on the application at hand, different emphasis can be placed on the two subproblems. The proposed optimal joint detection and estimation framework is also extended to multiple hypothesis tests. We apply the proposed test to the problem of detecting and estimating periodicities in DNA sequences and demonstrate the advantages of the new framework compared to the classical Neyman-Pearson approach and the GLRT.

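    For reference, the GLRT baseline the paper compares against looks as follows in a toy problem (the signal shape, known noise variance, and rank-one model are illustrative assumptions):

        import numpy as np

        def glrt(y, s, sigma2):
            """2*log GLR for H1: y = a*s + noise vs H0: y = noise, amplitude a unknown."""
            a_hat = (s @ y) / (s @ s)               # ML amplitude estimate under H1
            T = (s @ y) ** 2 / (sigma2 * (s @ s))   # compare to a chi-squared(1) threshold
            return T, a_hat

        rng = np.random.default_rng(0)
        s = np.sin(2 * np.pi * 0.1 * np.arange(64))
        y = 0.5 * s + rng.normal(0.0, 1.0, 64)
        print(glrt(y, s, 1.0))
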
  • A Novel Location-Penalized Maximum Likelihood Estimator for Bearing-Only Target Localization

    Publication Year: 2012 , Page(s): 6166 - 6181
    Cited by:  Papers (9)
    PDF (4984 KB) | HTML

    In this paper, we present a location-penalized maximum likelihood (LPML) estimator for bearing-only target localization. We develop a new penalized maximum likelihood cost function by transforming the variables of target position and bearings. The new penalized likelihood function can also be recognized as a posterior distribution under a Bayesian framework, with the penalty playing the role of a prior. We analyze the asymptotic properties and show that both the traditional bearing maximum likelihood (TBML) and LPML estimators are asymptotically efficient. To compare the performance of the TBML and LPML estimators, we analyze the Cramér-Rao lower bound (CRLB) of the two estimators and show that the bound for the LPML estimator is lower than that for the TBML estimator. Extensive simulations are performed, in which the new LPML algorithm consistently outperforms other well-known algorithms. Field experiments are also conducted by applying the method to localize a vehicle using real-world data acquired by an acoustic array sensor network; the LPML algorithm demonstrates superior performance in all the field experiments.

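    A minimal numerical sketch of the traditional bearing-only ML (TBML) baseline that LPML is measured against; the sensor layout, noise level, and starting point are illustrative assumptions:

        import numpy as np
        from scipy.optimize import minimize

        sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
        target = np.array([6.0, 7.0])
        rng = np.random.default_rng(1)
        d = target - sensors
        bearings = np.arctan2(d[:, 1], d[:, 0]) + rng.normal(0.0, 0.02, len(sensors))

        def tbml_cost(p):
            pred = np.arctan2(p[1] - sensors[:, 1], p[0] - sensors[:, 0])
            err = np.angle(np.exp(1j * (bearings - pred)))   # wrap differences to (-pi, pi]
            return np.sum(err ** 2)

        print(minimize(tbml_cost, x0=np.array([5.0, 5.0])).x)   # close to (6, 7)
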
  • Geodesic Convexity and Covariance Estimation

    Publication Year: 2012 , Page(s): 6182 - 6189
    Cited by:  Papers (8)
    PDF (1276 KB) | HTML

    Geodesic convexity is a generalization of classical convexity which guarantees that all local minima of g-convex functions are globally optimal. We consider g-convex functions with positive definite matrix variables and prove that Kronecker products and logarithms of determinants are g-convex. We apply these results to two modern covariance estimation problems: robust estimation in scaled Gaussian distributions, and Kronecker structured models. Maximum likelihood estimation in these settings involves non-convex minimizations. We show that these problems are in fact g-convex. This leads to straightforward analysis, allows the use of standard optimization methods, and paves the way to various extensions via additional g-convex regularization.

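    The central definition, written out explicitly (this is the standard affine-invariant geometry on the positive definite cone; the notation is ours, not quoted from the paper):

        % geodesic between positive definite matrices P_1 and P_2
        \[
          P_1 \#_t P_2 \;=\; P_1^{1/2}\left(P_1^{-1/2} P_2 P_1^{-1/2}\right)^{t} P_1^{1/2},
          \qquad t \in [0,1],
        \]
        % f is geodesically convex (g-convex) when, for all such P_1, P_2, t,
        \[
          f(P_1 \#_t P_2) \;\le\; (1-t)\, f(P_1) + t\, f(P_2),
        \]
        % so every local minimum of a g-convex f is global, even when f is not
        % convex in the usual Euclidean sense.
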
  • Radar Maneuvering Target Motion Estimation Based on Generalized Radon-Fourier Transform

    Publication Year: 2012 , Page(s): 6190 - 6201
    Cited by:  Papers (6)
    PDF (3665 KB) | HTML

    The slant range of a radar maneuvering target is usually modeled as a multivariate function of its illumination time and multiple motion parameters. This multivariate range function captures the modulations on both the envelope and the phase of an echo of the coherent radar target, and provides the foundation for radar target motion estimation. In this paper, maximum likelihood estimators (MLEs) are derived for motion estimation of a maneuvering target based on joint envelope and phase measurements, phase-only measurements, and envelope-only measurements, respectively, in the case of high signal-to-noise ratio (SNR). It is shown that the proposed MLEs amount to searching for the maxima of the outputs of the proposed generalized Radon-Fourier transform (GRFT), generalized Radon transform (GRT), and generalized Fourier transform (GFT), respectively. Furthermore, by approximating the slant range function by a high-order polynomial, the inherent accuracy limitations, i.e., the Cramér-Rao lower bounds (CRLBs), are derived and analyzed for high-order motion parameter estimation in different scenarios. Finally, numerical experimental results are provided to demonstrate the effectiveness of the proposed methods.

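    A toy numeric sketch of the GRFT idea for the simplest (constant-velocity) case: compensate range migration and phase jointly, then search the motion parameter. The pulse timing, wavelength, and range-bin model here are illustrative assumptions; the paper's estimator searches higher-order motion parameters as well.

        import numpy as np

        c, fc = 3e8, 1e9
        lam = c / fc
        M, dr = 64, 1.0                      # pulses, range-bin size (m)
        t = np.arange(M) * 1e-3              # slow time
        r0_true, v_true = 30.0, 120.0        # initial range (m), radial velocity (m/s)

        # simulate a point target migrating through range bins
        data = np.zeros((M, 100), dtype=complex)
        r = r0_true + v_true * t
        data[np.arange(M), np.round(r / dr).astype(int)] = np.exp(-1j * 4 * np.pi * r / lam)

        def grft(r0, v):
            rr = r0 + v * t
            bins = np.round(rr / dr).astype(int)
            return np.abs(np.sum(data[np.arange(M), bins] * np.exp(1j * 4 * np.pi * rr / lam)))

        vs = np.linspace(0.0, 200.0, 201)
        out = [grft(r0_true, v) for v in vs]
        print(vs[int(np.argmax(out))])       # peaks near v_true = 120 m/s
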
  • Generalized Orthogonal Matching Pursuit

    Publication Year: 2012 , Page(s): 6202 - 6216
    Cited by:  Papers (16)
    PDF (4411 KB) | HTML

    As a greedy algorithm to recover sparse signals from compressed measurements, the orthogonal matching pursuit (OMP) algorithm has received much attention in recent years. In this paper, we introduce an extension of OMP for pursuing efficiency in reconstructing sparse signals. Our approach, henceforth referred to as generalized OMP (gOMP), is literally a generalization of OMP in the sense that multiple (N) indices are identified per iteration. Owing to the selection of multiple “correct” indices, the gOMP algorithm finishes in a much smaller number of iterations than OMP. We show that gOMP can perfectly reconstruct any K-sparse signal (K > 1), provided that the sensing matrix satisfies the RIP with δ_NK < √N / (√K + 3√N). We also demonstrate by empirical simulations that gOMP has excellent recovery performance comparable to the l1-minimization technique, with fast processing speed and competitive computational complexity.

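    A minimal sketch of the gOMP idea as described in the abstract (select N largest-correlation indices per iteration, then re-fit by least squares); the stopping rule and sizes here are illustrative assumptions:

        import numpy as np

        def gomp(Phi, y, K, N=3, tol=1e-9):
            """Recover a K-sparse x from y = Phi @ x; N = indices added per iteration."""
            support, r = [], y.copy()
            while len(support) < min(N * K, Phi.shape[0]) and np.linalg.norm(r) > tol:
                corr = np.abs(Phi.T @ r)
                corr[support] = 0.0                       # do not reselect indices
                support += list(np.argsort(corr)[-N:])    # N new indices at once
                x_s, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
                r = y - Phi[:, support] @ x_s
            x = np.zeros(Phi.shape[1])
            x[support] = x_s
            return x

        rng = np.random.default_rng(0)
        m, n, K = 64, 256, 8
        Phi = rng.normal(size=(m, n)) / np.sqrt(m)
        x0 = np.zeros(n); x0[rng.choice(n, K, replace=False)] = rng.normal(size=K)
        print(np.linalg.norm(gomp(Phi, Phi @ x0, K) - x0))   # near zero
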
  • Diffusion Strategies Outperform Consensus Strategies for Distributed Estimation Over Adaptive Networks

    Publication Year: 2012 , Page(s): 6217 - 6234
    Cited by:  Papers (48)
    PDF (4605 KB) | HTML

    Adaptive networks consist of a collection of nodes with adaptation and learning abilities. The nodes interact with each other on a local level and diffuse information across the network to solve estimation and inference tasks in a distributed manner. In this work, we compare the mean-square performance of two main strategies for distributed estimation over networks: consensus strategies and diffusion strategies. The analysis in the paper confirms that under constant step-sizes, diffusion strategies allow information to diffuse more thoroughly through the network, and this property has a favorable effect on the evolution of the network: diffusion networks are shown to converge faster and reach lower mean-square deviation than consensus networks, and their mean-square stability is insensitive to the choice of the combination weights. In contrast, and surprisingly, it is shown that consensus networks can become unstable even if all the individual nodes are stable and able to solve the estimation task on their own. When this occurs, cooperation over the network leads to a catastrophic failure of the estimation task. This phenomenon does not occur for diffusion networks: we show that stability of the individual nodes always ensures stability of the diffusion network, irrespective of the combination topology. Simulation results support the theoretical findings.

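    A sketch of one diffusion variant, adapt-then-combine (ATC) diffusion LMS, over a small network; the ring topology, uniform combination weights, and data model are toy assumptions:

        import numpy as np

        rng = np.random.default_rng(0)
        w_true = np.array([1.0, -0.5, 0.25])
        nbrs = {k: [(k - 1) % 5, k, (k + 1) % 5] for k in range(5)}   # ring of 5 nodes
        W = [np.zeros(3) for _ in range(5)]
        mu = 0.02

        for _ in range(2000):
            psi = []
            for k in range(5):                      # adapt: local LMS step at each node
                u = rng.normal(size=3)
                d = u @ w_true + 0.1 * rng.normal()
                psi.append(W[k] + mu * u * (d - u @ W[k]))
            for k in range(5):                      # combine: average over the neighborhood
                W[k] = np.mean([psi[j] for j in nbrs[k]], axis=0)

        print(W[0])   # each node converges near w_true
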
  • Stochastic Analysis of a Stable Normalized Least Mean Fourth Algorithm for Adaptive Noise Canceling With a White Gaussian Reference

    Publication Year: 2012 , Page(s): 6235 - 6244
    Cited by:  Papers (7)
    PDF (2497 KB) | HTML

    The least mean fourth (LMF) algorithm has several stability problems. Its stability depends on the variance and distribution type of the adaptive filter input, the noise variance, and the initialization of the filter weights. A global solution to these stability problems was presented recently for a normalized LMF (NLMF) algorithm. Here, a stochastic analysis of the mean-square deviation (MSD) of the globally stable NLMF algorithm is provided. The analysis is done in the context of adaptive noise canceling with a white Gaussian reference input and Gaussian, binary, and uniform desired signals. The analytical model is shown to accurately predict the results of Monte Carlo simulations. Comparisons of the NLMF and normalized least-mean-square (NLMS) algorithms are then made for various parameter selections, and it is shown under what conditions the NLMF algorithm is superior to the NLMS algorithm for adaptive noise canceling.

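    For orientation, a sketch of the basic LMF recursion in the noise-canceling setup (the cubed error in the update is what distinguishes it from LMS); the specific normalization that makes the NLMF globally stable is the subject of the paper and is not reproduced here:

        import numpy as np

        rng = np.random.default_rng(0)
        h = np.array([0.8, -0.4, 0.2])               # unknown path from reference to primary
        w = np.zeros(3)
        mu = 0.01
        x_buf = np.zeros(3)

        for n in range(20000):
            x_buf = np.r_[rng.normal(), x_buf[:-1]]  # white Gaussian reference input
            d = x_buf @ h + 0.05 * rng.normal()      # primary input
            e = d - x_buf @ w
            w += mu * (e ** 3) * x_buf               # LMF: error cubed in the update

        print(w)                                     # approaches h while it stays stable
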
  • Fixed-Point Analysis and Parameter Selections of MSR-CORDIC With Applications to FFT Designs

    Publication Year: 2012 , Page(s): 6245 - 6256
    Cited by:  Papers (2)
    PDF (2633 KB) | HTML

    Mixed-scaling-rotation (MSR) coordinate rotation digital computer (CORDIC) is an attractive approach to synthesizing complex rotators. This paper presents a fixed-point error analysis and parameter selections of MSR-CORDIC with applications to the fast Fourier transform (FFT). First, the fixed-point mean squared error of the MSR-CORDIC is analyzed by considering both the angle approximation error and the signal round-off error incurred by finite precision arithmetic. The signal-to-quantization-noise ratio (SQNR) at the output of an FFT synthesized using MSR-CORDIC is thereafter estimated. Based on these analyses, two different parameter selection algorithms of MSR-CORDIC are proposed for general and dedicated MSR-CORDIC structures. The proposed algorithms minimize the number of adders and the word length when the SQNR of the FFT output is constrained. Design examples show that FFTs designed by the proposed method exhibit lower hardware complexity than existing methods.

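    For background, a sketch of the classical circular CORDIC rotation built only from shift-and-add style iterations; the MSR-CORDIC of the paper generalizes this with mixed scaling-rotation stages, which this basic sketch does not include:

        import numpy as np

        def cordic_rotate(x, y, theta, iters=16):
            """Rotate (x, y) by theta via CORDIC micro-rotations, then undo the gain."""
            K = np.prod(1.0 / np.sqrt(1.0 + 2.0 ** (-2.0 * np.arange(iters))))
            angles = np.arctan(2.0 ** -np.arange(iters))
            z = theta
            for i in range(iters):
                d = 1.0 if z >= 0 else -1.0
                x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
                z -= d * angles[i]
            return K * x, K * y

        print(cordic_rotate(1.0, 0.0, np.pi / 5))    # ~ (cos 36deg, sin 36deg)
        print(np.cos(np.pi / 5), np.sin(np.pi / 5))
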
  • Exact Wavelets on the Ball

    Publication Year: 2012 , Page(s): 6257 - 6269
    Cited by:  Papers (1)
    PDF (3665 KB) | HTML

    We develop an exact wavelet transform on the three-dimensional ball (i.e., on the solid sphere), which we name the flaglet transform. For this purpose we first construct an exact transform on the radial half-line using damped Laguerre polynomials and develop a corresponding quadrature rule. Combined with the spherical harmonic transform, this approach leads to a sampling theorem on the ball and a novel three-dimensional decomposition which we call the Fourier-Laguerre transform. We relate this new transform to the well-known Fourier-Bessel decomposition and show that band-limitedness in the Fourier-Laguerre basis is a sufficient condition for computing the Fourier-Bessel decomposition exactly. We then construct the flaglet transform on the ball through a harmonic tiling, which is exact thanks to the exactness of the Fourier-Laguerre transform (from which the name flaglet is coined). The corresponding wavelet kernels are well localised in real and Fourier-Laguerre spaces, and their angular aperture is invariant under radial translation. We introduce a multiresolution algorithm to perform the flaglet transform rapidly, while capturing all information at each wavelet scale in the minimal number of samples on the ball. Our implementation of these new tools achieves floating-point precision and is made publicly available. We perform numerical experiments demonstrating the speed and accuracy of these libraries and illustrate their capabilities on a simple denoising example.

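    A sketch of the kind of ingredient the radial construction builds on: Gauss-Laguerre quadrature on the half-line, which integrates polynomials against exp(-r) exactly (the paper uses damped Laguerre polynomials with a tailored rule; this generic rule is only an analogue):

        import numpy as np

        nodes, weights = np.polynomial.laguerre.laggauss(32)   # weight exp(-r) on [0, inf)

        def radial_integral(f):
            """Approximate the integral of f(r) * exp(-r) over [0, inf)."""
            return weights @ f(nodes)

        # exact for polynomials up to degree 63: integral of r^3 e^{-r} is 3! = 6
        print(radial_integral(lambda r: r ** 3))
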
  • Message-Passing De-Quantization With Applications to Compressed Sensing

    Publication Year: 2012 , Page(s): 6270 - 6281
    Cited by:  Papers (14)
    PDF (2175 KB) | HTML

    Estimation of a vector from quantized linear measurements is a common problem for which simple linear techniques are suboptimal, sometimes greatly so. This paper develops message-passing de-quantization (MPDQ) algorithms for minimum mean-squared error estimation of a random vector from quantized linear measurements, notably allowing the linear expansion to be overcomplete or undercomplete and the scalar quantization to be regular or non-regular. The algorithm is based on generalized approximate message passing (GAMP), a recently developed Gaussian approximation of loopy belief propagation for estimation with linear transforms and nonlinear, componentwise-separable output channels. For MPDQ, scalar quantization of measurements is incorporated into the output channel formalism, leading to the first tractable and effective method for high-dimensional estimation problems involving non-regular scalar quantization. The algorithm is computationally simple and can incorporate arbitrary separable priors on the input vector, including the sparsity-inducing priors that arise in the context of compressed sensing. Moreover, under the assumption of a Gaussian measurement matrix with i.i.d. entries, the asymptotic error performance of MPDQ can be accurately predicted and tracked through a simple set of scalar state evolution equations. We additionally use state evolution to design MSE-optimal scalar quantizers for MPDQ signal reconstruction and empirically demonstrate the superior error performance of the resulting quantizers. In particular, our results show that non-regular quantization can greatly improve rate-distortion performance in some problems with oversampling or with undersampling combined with a sparsity-inducing prior.

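    A scalar sketch of the de-quantization idea: the MMSE estimate of a Gaussian variable given only its quantization cell is the truncated-normal conditional mean, the same output-channel computation MPDQ embeds in GAMP (the vector and linear-mixing machinery is what the paper adds):

        import numpy as np
        from scipy.stats import norm

        def mmse_dequant(lo, hi, mu=0.0, sigma=1.0):
            """E[x | lo < x <= hi] for x ~ N(mu, sigma^2)."""
            a, b = (lo - mu) / sigma, (hi - mu) / sigma
            Z = norm.cdf(b) - norm.cdf(a)
            return mu + sigma * (norm.pdf(a) - norm.pdf(b)) / Z

        # cell (0.5, 1.5]: beats the midpoint 1.0 in mean-squared error
        print(mmse_dequant(0.5, 1.5))
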
  • Realization of 3-D Separable-Denominator Digital Filters With Low l2-Sensitivity

    Publication Year: 2012 , Page(s): 6282 - 6293
    PDF (4837 KB) | HTML

    Three-dimensional (3-D) digital filters find applications in a variety of image and video signal processing problems. This paper presents a coefficient-sensitivity analysis for a wide class of 3-D digital filters with separable denominators in local state space, which leads to an analytic formulation for sensitivity minimization, and presents two solution techniques for the resulting minimization problem. To this end, a vector-matrix-vector decomposition of a given 3-D transfer function is developed that separates the three variables and leads to a state-space realization in a form convenient for subsequent analysis. An l2-sensitivity analysis is then performed, resulting in a computationally tractable formula for the overall l2-sensitivity of 3-D digital filters. The l2-sensitivity is minimized subject to l2-scaling constraints by using one of two proposed solution methods: one relaxes the constraints into a single trace constraint and solves the relaxed problem with an effective matrix iteration scheme, while the other converts the constrained optimization problem into an unconstrained problem and solves it using a quasi-Newton algorithm. A case study is presented to illustrate the validity and effectiveness of the proposed techniques.

  • New Closed Formula for the Univariate Hermite Interpolating Polynomial of Total Degree and Its Application in Medical Image Slice Interpolation

    Publication Year: 2012 , Page(s): 6294 - 6304
    Cited by:  Papers (1)
    PDF (2187 KB) | HTML

    This work investigates the usefulness of univariate Hermite interpolation of total degree (HTD) for a biomedical signal processing task: slice interpolation in a variety of medical imaging modalities. HTD is an algebraically demanding interpolation method that utilizes the values of the signal to be interpolated at distinct support positions, as well as the values of its derivatives up to a maximum available order. First, a novel closed-form solution for the univariate Hermite interpolating polynomial is presented for the general case of arbitrarily spaced support points, and its computational and algebraic complexity is compared to that of the classical expression of the Hermite interpolating polynomial. Then, an implementation is proposed for the case of equidistant support positions with computational complexity comparable to any convolution-based interpolation method. We assess the proposed implementation of HTD interpolation with equidistant support points on the task of slice interpolation, which is usually treated as a one-dimensional problem. We performed a large number of interpolation experiments on 220 Magnetic Resonance Imaging (MRI) datasets and 50 Computed Tomography (CT) datasets and compared the proposed HTD implementation to several other well-established interpolation techniques. In our experiments, we approximated the signal derivatives using finite differences; however, the proposed HTD can accommodate any type of derivative calculation. Results show that HTD interpolation outperforms the other interpolation methods under comparison, in terms of root mean square error (RMSE), in every one of the interpolation experiments, resulting in more accurate interpolated images. Finally, the behavior of HTD with respect to its controlling parameters is explored and its computational complexity is determined.

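    A sketch of Hermite interpolation from values plus derivatives at arbitrarily spaced support points, here via SciPy's piecewise construction; note the paper's closed formula yields a single global total-degree polynomial, which this piecewise sketch only mirrors in spirit:

        import numpy as np
        from scipy.interpolate import BPoly

        xs = np.array([0.0, 1.0, 2.5])                 # arbitrarily spaced supports
        f, fp = np.sin(xs), np.cos(xs)                 # values and first derivatives
        H = BPoly.from_derivatives(xs, np.c_[f, fp])   # matches f and f' at each support

        t = np.linspace(0.0, 2.5, 7)
        print(np.max(np.abs(H(t) - np.sin(t))))        # small interpolation error
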
  • Polynomial Smoothing of Time Series With Additive Step Discontinuities

    Publication Year: 2012 , Page(s): 6305 - 6318
    Cited by:  Papers (3)
    Multimedia
    PDF (2204 KB)

    This paper addresses the problem of simultaneously estimating a local polynomial signal and an approximately piecewise constant signal from a noisy additive mixture. The approach developed in this paper synthesizes the total variation filter and least-squares polynomial signal smoothing into a unified problem formulation, based on an l1-norm regularized inverse problem. A computationally efficient algorithm, based on variable splitting and the alternating direction method of multipliers (ADMM), is presented. Algorithms are derived for both unconstrained and constrained formulations. The method is illustrated on experimental data involving the detection of nanoparticles, with applications to real-time virus detection using a whispering-gallery-mode detector.

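    The formulation is compact enough to prototype with a generic convex solver; this hedged sketch uses cvxpy purely for exposition (the paper's ADMM algorithm is the efficient solver, and the signal, degree, and lambda below are illustrative assumptions):

        import cvxpy as cp
        import numpy as np

        rng = np.random.default_rng(0)
        n = 200
        t = np.linspace(-1.0, 1.0, n)
        steps = 0.8 * (t > 0.2) - 0.5 * (t > 0.6)        # additive step discontinuities
        y = 0.5 * t ** 2 - 0.3 * t + steps + 0.05 * rng.normal(size=n)

        B = np.vander(t, 3)                  # local polynomial model (degree 2)
        a = cp.Variable(3)                   # polynomial coefficients
        s = cp.Variable(n)                   # approximately piecewise constant component
        lam = 1.0
        cost = cp.sum_squares(y - B @ a - s) + lam * cp.norm1(cp.diff(s))
        cp.Problem(cp.Minimize(cost)).solve()
        print(a.value)   # polynomial part (a constant offset between a and s is ambiguous)
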
  • A Multilevel Iterated-Shrinkage Approach to l1 Penalized Least-Squares Minimization

    Publication Year: 2012 , Page(s): 6319 - 6329
    Cited by:  Papers (1)
    Multimedia
    PDF (1970 KB)

    The area of sparse approximation of signals has drawn tremendous attention in recent years. Typically, sparse solutions of underdetermined linear systems of equations are required, and such solutions are often achieved by minimizing an l1 penalized least-squares functional. Various iterative-shrinkage algorithms have recently been developed and are quite effective for handling these problems, often surpassing traditional optimization techniques. In this paper, we suggest a new iterative multilevel approach that reduces the computational cost of existing solvers for these inverse problems. Our method takes advantage of the typically sparse representation of the signal: at each iteration it adaptively creates and processes a hierarchy of lower-dimensional problems, employing well-known iterated-shrinkage methods. Analytical observations suggest, and numerical results confirm, that this new approach may significantly enhance the performance of existing iterative-shrinkage algorithms in cases where the matrix is given explicitly.

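    A sketch of the base iterated-shrinkage step (ISTA) that such multilevel methods accelerate: a gradient step on the least-squares term followed by soft thresholding; the multilevel hierarchy itself is not reproduced here.

        import numpy as np

        def ista(A, y, lam, iters=500):
            """Iterated shrinkage (ISTA) for 0.5*||A x - y||^2 + lam*||x||_1."""
            L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            for _ in range(iters):
                g = x - A.T @ (A @ x - y) / L             # gradient step
                x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # soft threshold
            return x
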
  • A Phase-Sensitive Approach to Filtering on the Sphere

    Publication Year: 2012 , Page(s): 6330 - 6339
    PDF (2016 KB) | HTML

    This paper examines filtering on the sphere, starting from the roles of spherical harmonic magnitude and phase. We show that phase is more important than magnitude in determining the structure of a spherical function. We examine the properties of linear phase shifts in the spherical harmonic domain, which suggest a mechanism for constructing finite-impulse-response (FIR) filters. We show that these filters have desirable properties: they are associative, map spherical functions to spherical functions, allow directional filtering, and are defined by relatively simple equations. We provide examples of the filters for both spherical and manifold data.

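    A one-dimensional analogue of the magnitude-versus-phase point (an analogy only; the paper makes the claim precise for spherical harmonics): keep the Fourier phase of one signal and the magnitude of another, and the result resembles the phase donor.

        import numpy as np

        rng = np.random.default_rng(0)
        a = np.convolve(rng.normal(size=256), np.ones(9) / 9, mode="same")  # structured
        b = rng.normal(size=256)                                            # unstructured
        A, B = np.fft.fft(a), np.fft.fft(b)
        hybrid = np.fft.ifft(np.abs(B) * np.exp(1j * np.angle(A))).real
        print(np.corrcoef(hybrid, a)[0, 1])   # correlates with the phase donor a
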
  • Simultaneous Codeword Optimization (SimCO) for Dictionary Update and Learning

    Publication Year: 2012 , Page(s): 6340 - 6353
    Cited by:  Papers (13)
    PDF (5372 KB) | HTML

    We consider the data-driven dictionary learning problem. The goal is to seek an over-complete dictionary from which every training signal can be best approximated by a linear combination of only a few codewords. This task is often achieved by iteratively executing two operations: sparse coding and dictionary update. The focus of this paper is on the dictionary update step, where the dictionary is optimized with a given sparsity pattern. We propose a novel framework in which an arbitrary set of codewords and the corresponding sparse coefficients are simultaneously updated, hence the term simultaneous codeword optimization (SimCO). The SimCO formulation not only generalizes the benchmark MOD and K-SVD mechanisms, but also allows the discovery that singular points, rather than local minima, are the major bottleneck of dictionary update. To mitigate the problem caused by singular points, regularized SimCO is proposed, and first- and second-order optimization procedures are designed to solve it. Simulations show that regularization substantially improves the performance of dictionary learning.

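    For context, a sketch of MOD, one of the benchmark dictionary-update rules that SimCO generalizes: with the sparse coefficients X fixed, the Frobenius-optimal dictionary is a least-squares solve followed by column renormalization (the data here are random placeholders):

        import numpy as np

        def mod_update(Y, X):
            D = Y @ np.linalg.pinv(X)               # minimize ||Y - D X||_F over D
            return D / np.linalg.norm(D, axis=0)    # unit-norm codewords

        rng = np.random.default_rng(0)
        Y = rng.normal(size=(20, 500))                                  # training signals
        X = rng.normal(size=(40, 500)) * (rng.random((40, 500)) < 0.1)  # sparse codes
        D = mod_update(Y, X)
        print(D.shape, np.linalg.norm(D, axis=0)[:3])   # (20, 40), unit columns
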
  • Structure-Based Bayesian Sparse Reconstruction

    Publication Year: 2012 , Page(s): 6354 - 6367
    Cited by:  Papers (7)
    PDF (3289 KB) | HTML

    Sparse signal reconstruction algorithms have attracted research attention due to their wide applications in various fields. In this paper, we present a simple Bayesian approach that utilizes the sparsity constraint and a priori statistical information (Gaussian or otherwise) to obtain near-optimal estimates. In addition, we make use of the rich structure of the sensing matrix encountered in many signal processing applications to develop a fast sparse recovery algorithm. The computational complexity of the proposed algorithm is very low compared with the widely used convex relaxation methods as well as greedy matching pursuit techniques, especially at high sparsity.

  • Finding Non-Overlapping Clusters for Generalized Inference Over Graphical Models

    Publication Year: 2012 , Page(s): 6368 - 6381
    Cited by:  Papers (1)
    PDF (2584 KB) | HTML

    Graphical models use graphs to compactly capture stochastic dependencies amongst a collection of random variables. Inference over graphical models corresponds to finding marginal probability distributions given the joint probability distribution. In general, this is computationally intractable, which has led to a quest for efficient approximate inference algorithms. We propose a framework for generalized inference over graphical models that can be used as a wrapper for improving the estimates of approximate inference algorithms. Instead of applying an inference algorithm to the original graph, we apply it to a block-graph, defined as a graph in which the nodes are non-overlapping clusters of nodes from the original graph. This results in marginal estimates of clusters of nodes, which we further marginalize to get the marginal estimates of each node. Our proposed block-graph construction algorithm is simple and efficient, and is motivated by the observation that approximate inference is more accurate on graphs with longer cycles. We present extensive numerical simulations that illustrate our block-graph framework with a variety of inference algorithms (e.g., those in the libDAI software package); these simulations show the improvements provided by our framework.

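    A sketch of the block-graph construction itself: merge disjoint clusters of nodes into super-nodes, connecting two super-nodes when any original edge crosses between them. The paper's clustering heuristic (favoring longer cycles) is not reproduced; the partition below is given by hand.

        import networkx as nx

        def block_graph(G, clusters):
            """clusters: list of disjoint node lists covering G."""
            label = {v: i for i, c in enumerate(clusters) for v in c}
            B = nx.Graph()
            B.add_nodes_from(range(len(clusters)))
            B.add_edges_from((label[u], label[v]) for u, v in G.edges()
                             if label[u] != label[v])
            return B

        G = nx.grid_2d_graph(4, 4)
        clusters = [[(i, j) for j in range(4)] for i in range(4)]   # merge each row
        print(block_graph(G, clusters).edges())   # a path over 4 super-nodes
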
  • DOA Estimation Using a Greedy Block Coordinate Descent Algorithm

    Publication Year: 2012 , Page(s): 6382 - 6394
    Cited by:  Papers (3)
    PDF (2767 KB) | HTML

    This paper presents a novel jointly sparse signal reconstruction algorithm for the DOA estimation problem, aiming to achieve a faster convergence rate and better estimation accuracy than existing l2,1-norm minimization approaches. The proposed greedy block coordinate descent (GBCD) algorithm is similar to the standard block coordinate descent method for l2,1-norm minimization, but adopts a greedy block selection rule which gives preference to sparsity. Although greedy, the proposed algorithm is proved to converge globally. Through theoretical analysis we demonstrate its stability, in the sense that all nonzero supports found by the proposed algorithm are the actual ones under certain conditions. Finally, we propose a weighted form of the block selection rule based on the MUSIC prior. This refinement greatly improves the estimation accuracy, especially when two point sources are closely spaced. Numerical experiments show that the proposed GBCD algorithm has several notable advantages over existing DOA estimation methods, such as a fast convergence rate, accurate reconstruction, and noise resistance.

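    A hedged sketch of greedy block coordinate descent for the row-sparse (l2,1) formulation: each pass updates the single row whose exact coordinate minimizer (a block soft-threshold) changes the most. The selection rule, grid, and data below are simplified assumptions, not the paper's exact algorithm or its MUSIC-weighted refinement.

        import numpy as np

        def gbcd(A, Y, lam, sweeps=200):
            """Greedy BCD for 0.5*||Y - A X||_F^2 + lam*sum_i ||X[i]||_2, unit-norm columns."""
            X = np.zeros((A.shape[1], Y.shape[1]))
            R = Y.copy()
            for _ in range(sweeps):
                C = A.conj().T @ R + X                           # full per-row correlations
                norms = np.linalg.norm(C, axis=1, keepdims=True)
                Xnew = np.maximum(0.0, 1.0 - lam / np.maximum(norms, 1e-12)) * C
                i = np.argmax(np.linalg.norm(Xnew - X, axis=1))  # greedy: largest row update
                R += np.outer(A[:, i], X[i] - Xnew[i])
                X[i] = Xnew[i]
            return X

        rng = np.random.default_rng(0)
        A = rng.normal(size=(32, 90)); A /= np.linalg.norm(A, axis=0)
        X0 = np.zeros((90, 10)); X0[[20, 55]] = rng.normal(size=(2, 10))
        X = gbcd(A, A @ X0, lam=1.0)
        print(np.argsort(np.linalg.norm(X, axis=1))[-2:])        # dominant rows: 20 and 55
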
  • Low-Complexity Blind Equalization for OFDM Systems With General Constellations

    Publication Year: 2012 , Page(s): 6395 - 6407
    PDF (3895 KB) | HTML

    This paper proposes a low-complexity algorithm for blind equalization of data in orthogonal frequency division multiplexing (OFDM)-based wireless systems with general constellations. The proposed algorithm is able to recover the transmitted data even when the channel changes on a symbol-by-symbol basis, making it suitable for fast fading channels. The algorithm does not require any statistical information about the channel and thus does not suffer from the latency normally associated with blind methods. The paper demonstrates how to reduce the complexity of the algorithm, which becomes especially low at high signal-to-noise ratio (SNR). Specifically, it is shown that in the high-SNR regime the number of operations is of order O(LN), where L is the cyclic prefix length and N is the total number of subcarriers. Simulation results confirm the favorable performance of the proposed algorithm.

  • Quantization via Empirical Divergence Maximization

    Publication Year: 2012 , Page(s): 6408 - 6420
    PDF (3710 KB) | HTML

    Empirical divergence maximization (EDM) refers to a recently proposed strategy for estimating f-divergences and likelihood ratio functions. This paper extends the idea to empirical vector quantization, where one seeks to empirically derive quantization rules that maximize the Kullback-Leibler divergence between two statistical hypotheses. We analyze the estimator's error convergence rate leveraging Tsybakov's margin condition and show that rates as fast as n^{-1} are possible, where n is the number of training samples. We also show that the Flynn and Gray algorithm can be used to efficiently compute EDM estimates, and that these estimates can be efficiently and accurately represented by recursive dyadic partitions. The EDM formulation has several advantages. First, it gives access to the tools and results of empirical process theory that quantify the estimator's error convergence rate. Second, it provides a previously unknown derivation of the Flynn and Gray algorithm. Third, its flexibility allows one to avoid a small-cell assumption common in other approaches. Finally, we illustrate the potential use of the method through an example.


Aims & Scope

IEEE Transactions on Signal Processing covers novel theory, algorithms, performance analyses and applications of techniques for the processing, understanding, learning, retrieval, mining, and extraction of information from signals.


Meet Our Editors

Editor-in-Chief
Sergios Theodoridis
University of Athens