
IEEE Transactions on Signal Processing

Issue 11 • Nov. 2009


Displaying Results 1 - 25 of 46
  • Table of contents

    Publication Year: 2009 , Page(s): C1 - C4
    Freely Available from IEEE
  • IEEE Transactions on Signal Processing publication information

    Publication Year: 2009 , Page(s): C2
    Freely Available from IEEE
  • On the Statistics of Spectral Amplitudes After Variance Reduction by Temporal Cepstrum Smoothing and Cepstral Nulling

    Publication Year: 2009 , Page(s): 4165 - 4174
    Cited by:  Papers (12)  |  Patents (1)

    In this paper, we derive the signal power bias that arises when spectral amplitudes are smoothed by reducing their variance in the cepstral domain (often referred to as cepstral smoothing) and develop a power bias compensation method. We show that if chi-distributed spectral amplitudes are smoothed in the cepstral domain, the resulting smoothed spectral amplitudes are also approximately chi-distributed but with more degrees of freedom and less signal power. The key finding for the proposed power bias compensation method is that the degrees of freedom of chi-distributed spectral amplitudes are directly related to their average cepstral variance. Furthermore, this work gives new insights into the statistics of the cepstral coefficients derived from chi-distributed spectral amplitudes using tapered spectral analysis windows. We derive explicit expressions for the variance and covariance of correlated chi-distributed spectral amplitudes and the resulting cepstral coefficients, parameterized by the degrees of freedom. The results in this work allow for a cepstral smoothing of spectral quantities without affecting their signal power. As we assume the parameterized chi-distribution for the spectral amplitudes, the results hold for Gaussian, super-Gaussian, and sub-Gaussian distributed complex spectral coefficients. The proposed bias compensation method is computationally inexpensive and shown to work very well for white and colored signals, as well as for rectangular and tapered spectral analysis windows.
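    A rough numerical sketch of the mechanism described above (all sizes and the quefrency cutoff are arbitrary illustration values, not the paper's): lowpass filtering the log-spectrum in the cepstral domain reduces the variance of the log amplitudes, which is exactly the smoothing effect whose power bias the paper quantifies and compensates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sketch only (not the paper's bias-compensation method):
# smooth chi-squared-like spectral amplitudes by lowpass filtering in the
# cepstral (DFT-of-log-spectrum) domain, then observe the variance drop.
K = 256                                   # frequency bins
amplitudes = rng.chisquare(2, size=K)     # toy spectral powers
log_amp = np.log(amplitudes + 1e-12)

cepstrum = np.fft.fft(log_amp)
cutoff = 16                               # quefrencies to keep
mask = np.zeros(K)
mask[:cutoff] = 1.0
mask[-(cutoff - 1):] = 1.0                # conjugate-symmetric counterpart
smoothed_log = np.fft.ifft(cepstrum * mask).real
smoothed = np.exp(smoothed_log)

# Variance is reduced; the paper shows this smoothing also biases the
# signal power and derives a compensation for that bias.
print(np.var(smoothed_log) < np.var(log_amp))
```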

  • Study of Two Error Functions to Approximate the Neyman–Pearson Detector Using Supervised Learning Machines

    Publication Year: 2009 , Page(s): 4175 - 4181
    Cited by:  Papers (8)

    A study of the possibility of approximating the Neyman-Pearson detector using supervised learning machines is presented. Two error functions are considered for training: the sum-of-squares error and the Minkowski error with R = 1. The study is based on calculating the function that the learning machine approximates during training, and on applying a previously formulated sufficient condition. Experiments on signal detection using neural networks are also presented to test the validity of the study. Theoretical and experimental results demonstrate, on one hand, that only the sum-of-squares error is suitable for approximating the Neyman-Pearson detector and, on the other hand, that the Minkowski error with R = 1 is suitable for approximating the minimum-probability-of-error classifier.
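    The property underlying this result can be checked numerically: with 0/1 labels, the function minimizing the sum-of-squares error is the conditional mean E[t|x] = P(H1|x), a monotone function of the likelihood ratio. The toy check below (illustrative Gaussian detection problem, not from the paper) uses a per-bin mean, which is the least-squares-optimal piecewise-constant fit.

```python
import numpy as np

rng = np.random.default_rng(1)

# Numerical sketch: minimizing the sum-of-squares error with 0/1 labels
# yields E[t|x] = P(H1|x); all names and values here are illustrative.
n = 200_000
labels = rng.integers(0, 2, n)                        # equiprobable H0/H1
x = rng.normal(loc=labels.astype(float))              # H1 shifts the mean to 1

bins = np.linspace(-2.5, 3.5, 61)
idx = np.digitize(x, bins)
# the per-bin mean is exactly the least-squares-optimal constant per bin
est = np.array([labels[idx == i].mean() for i in range(1, len(bins))])

centers = 0.5 * (bins[:-1] + bins[1:])
posterior = 1.0 / (1.0 + np.exp(-(centers - 0.5)))    # analytic P(H1|x)
print(np.max(np.abs(est - posterior)) < 0.1)
```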

  • Collaborative Cyclostationary Spectrum Sensing for Cognitive Radio Systems

    Publication Year: 2009 , Page(s): 4182 - 4195
    Cited by:  Papers (86)

    This paper proposes an energy efficient collaborative cyclostationary spectrum sensing approach for cognitive radio systems. An existing statistical hypothesis test for the presence of cyclostationarity is extended to multiple cyclic frequencies and its asymptotic distributions are established. Collaborative test statistics are proposed for the fusion of local test statistics of the secondary users, and a censoring technique in which only informative test statistics are transmitted to the fusion center (FC) during the collaborative detection is further proposed for improving energy efficiency in mobile applications. Moreover, a technique for numerical approximation of the asymptotic distribution of the censored FC test statistic is proposed. The proposed tests are nonparametric in the sense that no assumptions on data or noise distributions are required. In addition, the tests allow dichotomizing between the desired signal and interference. Simulation experiments are provided that show the benefits of the proposed cyclostationary approach compared to energy detection, the importance of collaboration among spatially displaced secondary users for overcoming shadowing and fading effects, as well as the reliable performance of the proposed algorithms even in very low signal-to-noise ratio (SNR) regimes and under strict communication rate constraints for collaboration overhead.
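    A minimal single-cycle sketch of the feature such detectors exploit (the paper's multi-cycle collaborative test and censoring are not modeled here): a cyclostationary signal has a nonzero cyclic autocorrelation at its cycle frequency, while stationary white noise does not.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-cycle illustration (not the authors' multi-cycle collaborative
# test): the cyclic autocorrelation at cyclic frequency alpha = 1/L separates
# a cyclostationary signal from stationary white noise.
L = 8                                    # samples per symbol (cycle period)
n_sym = 4000
t = np.arange(n_sym * L)
pulse = np.sin(np.pi * (t % L) / L)      # half-sine pulse shaping
symbols = rng.choice([-1.0, 1.0], size=n_sym)
x = np.repeat(symbols, L) * pulse        # cyclostationary with period L
noise = rng.normal(size=t.size)          # stationary, no cyclic feature

def cyclic_stat(y, alpha):
    """Magnitude of the lag-0 cyclic autocorrelation estimate."""
    n = np.arange(y.size)
    return abs(np.mean(np.abs(y) ** 2 * np.exp(-2j * np.pi * alpha * n)))

print(cyclic_stat(x, 1 / L) > 5 * cyclic_stat(noise, 1 / L))
```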

  • On the Effect of Shadow Fading on Wireless Geolocation in Mixed LoS/NLoS Environments

    Publication Year: 2009 , Page(s): 4196 - 4208
    Cited by:  Papers (1)

    This paper considers wireless non-line-of-sight (NLoS) geolocation in mixed LoS/NLoS environments using time-of-arrival information. We derive the Cramer-Rao bound (CRB) for deterministic shadowing, the asymptotic CRB (ACRB) based on the statistical average over random shadowing, a generalization of the modified CRB (MCRB) called the simplified Bayesian CRB (SBCRB), and the Bayesian CRB (BCRB) when a priori knowledge of the shadowing probability density function is available. In the deterministic case, numerical examples show that for effective bandwidths on the order of kHz, the CRB is almost unchanged by the additional length of the NLoS path except over a small interval of lengths, in which the CRB changes dramatically. For effective bandwidths on the order of MHz, the CRB decreases monotonically with the additional length of the NLoS path and finally converges to a constant as that length approaches infinity. In the random shadowing scenario, the shadowing exponent is modeled as σ_v = uσ, where u is a Gaussian random variable with zero mean and unit variance and σ is another Gaussian random variable with mean μ_σ and standard deviation σ_σ. When μ_σ is large, the ACRB increases considerably with σ_σ, whereas the SBCRB decreases gradually with σ_σ. In addition, the SBCRB approximates the BCRB well.

  • Parameter Estimation of Phase-Modulated Signals Using Bayesian Unwrapping

    Publication Year: 2009 , Page(s): 4209 - 4219
    Cited by:  Papers (6)

    Parametric estimation of phase-modulated signals (PMS) in additive white Gaussian noise is considered. The prohibitive computational expense of maximum likelihood estimation for this problem has led to the development of many suboptimal estimators which are relatively inaccurate and cannot operate at low signal-to-noise ratios (SNRs). In this paper, a novel technique based on a probabilistic unwrapping of the phase of the observations is developed. The method is capable of more accurate estimation and operates effectively at much lower SNRs than existing algorithms. This is demonstrated in Monte Carlo simulations.
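    As context, a baseline sketch of the deterministic approach the paper improves on (simple `np.unwrap` followed by polynomial fitting; all parameter values are illustrative): it works at moderate SNR but breaks down at the low SNRs the Bayesian unwrapping targets.

```python
import numpy as np

rng = np.random.default_rng(3)

# Baseline sketch (deterministic unwrapping via np.unwrap, not the paper's
# probabilistic method): estimate polynomial phase parameters of a PMS in
# noise by unwrapping the observed phase and fitting a polynomial.
N = 256
t = np.arange(N, dtype=float)
true_quadratic = 1e-4
phase = 0.5 + 0.02 * t + true_quadratic * t**2
x = np.exp(1j * phase) + 0.05 * (rng.normal(size=N) + 1j * rng.normal(size=N))

unwrapped = np.unwrap(np.angle(x))       # fails at low SNR, which motivates
coeffs = np.polyfit(t, unwrapped, 2)     # the Bayesian approach of the paper
print(abs(coeffs[0] - true_quadratic) < 1e-5)
```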

  • Radiological Source Detection and Localisation Using Bayesian Techniques

    Publication Year: 2009 , Page(s): 4220 - 4231
    Cited by:  Papers (8)

    The problem considered in this paper is detection and estimation of multiple radiation sources using a time series of radiation counts from a collection of sensors. A Bayesian framework is adopted. Source detection is approached as a model selection problem in which competing models are compared using partial Bayes factors. Given the number of sources, the posterior mean is the minimum mean square error estimator of the source parameters. Exact calculation of the partial Bayes factors and the posterior mean is not possible due to the presence of intractable integrals. Importance sampling using progressive correction is proposed as a computationally efficient method for approximating these integrals. Previously proposed algorithms have been restricted to one or two sources. A simulation analysis shows that the proposed methods can detect and accurately estimate the parameters of four sources with reasonable computational expense.
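    A hypothetical one-source sketch of the basic tool involved (plain importance sampling without the paper's progressive correction; the model and numbers are illustrative): posterior expectations with intractable normalizers are approximated by weighted samples from a proposal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sketch of the building block the paper relies on: importance
# sampling approximates posterior expectations that have no closed form.
# Toy model: Poisson counts from one source, flat prior on its rate in [0, 20].
counts = np.array([7, 9, 8, 6, 10])

samples = rng.uniform(0.0, 20.0, size=200_000)        # proposal = the prior
log_w = np.sum(counts[:, None] * np.log(samples) - samples, axis=0)
w = np.exp(log_w - log_w.max())                       # unnormalized weights
post_mean = np.sum(w * samples) / np.sum(w)           # posterior mean estimate

# here the posterior is essentially Gamma(41, 5), with mean 41/5 = 8.2
print(abs(post_mean - 8.2) < 0.05)
```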

  • The Bin-Occupancy Filter and Its Connection to the PHD Filters

    Publication Year: 2009 , Page(s): 4232 - 4246
    Cited by:  Papers (21)

    An algorithm that is capable not only of tracking multiple targets but also of "track management" (meaning that it does not need to know the number of targets as a user input) is of considerable interest. In this paper we devise a recursive track-managed filter via a quantized state-space ("bin") model. In the limit, as the discretization implied by the bins becomes as refined as possible (infinitesimal bins), we find that the filter equations are identical to Mahler's probability hypothesis density (PHD) filter, a novel track-managed filtering scheme that is attracting increasing attention. Thus, one contribution of this paper is an interpretation of, if not the PHD itself, at least what the PHD is doing. This does offer some intuitive appeal, but has some practical use as well: with this model it is possible to identify the PHD's "target-death" problem, and also the statistical inference structures of the PHD filters. To obviate the target-death problem, PHD originator Mahler developed a new "cardinalized" version of the PHD (CPHD). The second contribution of this paper is to extend the "bin-occupancy" model such that the resulting recursive filter is identical to the cardinalized PHD filter.

  • On Approximate Maximum-Likelihood Methods for Blind Identification: How to Cope With the Curse of Dimensionality

    Publication Year: 2009 , Page(s): 4247 - 4259
    Cited by:  Papers (3)

    We discuss approximate maximum-likelihood methods for blind identification and deconvolution. These algorithms are based on particle approximation versions of the expectation-maximization (EM) algorithm. We consider three different methods which differ in the way the posterior distribution of the symbols is computed. The first algorithm is a particle approximation method of the fixed-interval smoothing. The two-filter smoothing and the novel joined-two-filter smoothing involve an additional backward-information filter. Because the state space is finite, it is furthermore possible at each step to consider all the offsprings of any given particle. It is then required to construct a novel particle swarm by selecting, among all these offsprings, particle positions and computing appropriate weights. We propose here a novel unbiased selection scheme, which minimizes the expected loss with respect to general distance functions. We compare these smoothing algorithms and selection schemes in a Monte Carlo experiment. We show a significant performance increase compared to the expectation maximization Viterbi algorithm (EMVA), a fixed-lag smoothing algorithm and the block constant modulus algorithm (CMA).

  • The QS-Householder Sliding Window Bi-SVD Subspace Tracker

    Publication Year: 2009 , Page(s): 4260 - 4268
    Cited by:  Papers (1)

    A fast algorithm for computing the sliding window bi-SVD subspace tracker is introduced. This algorithm produces, in each time step, a dominant rank-r SVD subspace approximant of an L × N rectangular sliding window data matrix. The method is based on the QS (orthonormal-square) decomposition. It uses two row-Householder transformations for updating and one nonorthogonal Householder transformation for downdating in each time step. The resulting algorithm is long-term stable and shows excellent numerical and structural properties, as known from pure Householder-type algorithms. The dominant complexity is 4Lr + 3Nr multiplications per time update, which is also the lower bound in dominant complexity for an algorithm of this kind. A completely self-contained algorithm summary is provided and a Fortran subroutine of the algorithm is available for download from http://webuser.hs-furtwangen.de/~strobach/qsh-bisvd.for.
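    For reference, a sketch of the brute-force computation such trackers approximate (toy sizes; the paper's Householder updates avoid recomputing the SVD at every step):

```python
import numpy as np

rng = np.random.default_rng(5)

# Reference (brute-force) computation the fast tracker approximates: at each
# time step, the dominant rank-r left SVD subspace of the current L x N
# sliding window (sizes here are arbitrary illustration values).
L, N, r = 16, 64, 3
stream = rng.normal(size=(L, 200))       # toy data stream

def dominant_subspace(window, r):
    U, s, Vt = np.linalg.svd(window, full_matrices=False)
    return U[:, :r]

Q = dominant_subspace(stream[:, :N], r)  # recomputed per step; the paper's
print(Q.shape)                           # fast updates avoid this cost
print(np.allclose(Q.T @ Q, np.eye(r)))
```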

  • Illumination Sensing in LED Lighting Systems Based on Frequency-Division Multiplexing

    Publication Year: 2009 , Page(s): 4269 - 4281
    Cited by:  Papers (9)

    Recently, light emitting diode (LED) based illumination systems have attracted considerable research interest. Such systems normally consist of a large number of LEDs. In order to facilitate the control of such a high-complexity system, a novel signal processing application, namely illumination sensing, is studied. In this paper, the system concept and research challenges of illumination sensing are presented. Thereafter, we investigate a frequency-division multiplexing (FDM) scheme to distinguish the signals from different LEDs, such that we are able to estimate the illuminances of all the LEDs simultaneously. Moreover, a filter bank sensor structure is proposed to study the key properties of the FDM scheme. Conditions on the design of the filter response are imposed for the ideal case without any frequency inaccuracy, as well as for the case with frequency inaccuracies. The maximum number of LEDs that can be supported in each case is also derived. In particular, it is shown that, among all the considered functions, the triangular function gives the best tradeoff between the number of LEDs that can be supported and the allowable clock inaccuracies within a practical range. Moreover, through numerical investigations, we show that many tens of LEDs can be supported for the considered system parameters. Remarks on low-cost implementations of the proposed sensor structure are also provided.
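    An idealized sketch of the FDM principle (the paper's filter-bank sensor and its clock-inaccuracy analysis are not modeled; frequencies and illuminances below are made up): each LED is modulated at its own frequency, and one photodetector stream is demixed by correlating against each frequency.

```python
import numpy as np

# Toy FDM demixing (idealized): each LED is modulated at its own frequency,
# and a single detector sample stream is demixed by correlation.
fs = 10_000
t = np.arange(fs) / fs                    # one second of samples
freqs = [100.0, 150.0, 200.0]             # illustrative LED frequencies
illum = [1.0, 0.5, 2.0]                   # illuminance of each LED

x = sum(a * (1 + np.cos(2 * np.pi * f * t)) for a, f in zip(illum, freqs))
est = [2 * np.mean(x * np.cos(2 * np.pi * f * t)) for f in freqs]
print(np.allclose(est, illum, atol=1e-6))
```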

  • Generic Invertibility of Multidimensional FIR Filter Banks and MIMO Systems

    Publication Year: 2009 , Page(s): 4282 - 4291
    Cited by:  Papers (9)

    In this paper, we study the invertibility of M-variate Laurent polynomial N × P matrices. Such matrices represent multidimensional systems in various settings such as filter banks, multiple-input multiple-output systems, and multirate systems. Given an N × P Laurent polynomial matrix H(z1, ..., zM) of degree at most k, we want to find a P × N Laurent polynomial left inverse matrix G(z) of H(z) such that G(z)H(z) = I. We provide computable conditions to test the invertibility and propose algorithms to find a particular inverse. The main result of this paper is to prove that H(z) is generically invertible when N - P ≥ M; whereas when N - P < M, then H(z) is generically noninvertible. As a result, we propose an algorithm to find a particular inverse of a Laurent polynomial matrix that is faster than current algorithms known to us.
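    The degree-0 special case (ordinary constant matrices, i.e., no variables) can be checked in a few lines and conveys the "generic" flavor of the result: a random tall N × P matrix almost surely has a left inverse.

```python
import numpy as np

rng = np.random.default_rng(6)

# Degree-0 special case (ordinary matrices): a random "tall" N x P matrix
# generically has full column rank, hence a left inverse G with G @ H = I.
N, P = 4, 2
H = rng.normal(size=(N, P))
G = np.linalg.pinv(H)                     # one particular left inverse
print(np.allclose(G @ H, np.eye(P)))
```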

  • On Bounds of Shift Variance in Two-Channel Multirate Filter Banks

    Publication Year: 2009 , Page(s): 4292 - 4303
    Cited by:  Papers (7)

    Critically sampled multirate FIR filter banks exhibit periodically shift-variant behavior caused by nonideal antialiasing filtering in the decimation stage. We assess their shift variance quantitatively by analyzing changes in the output signal when the filter bank operator and the shift operator are interchanged. We express these changes by a so-called commutator. We then derive a sharp upper bound for shift variance via the operator norm of the commutator, which is independent of the input signal. Its core is an eigensystem analysis carried out within a frequency domain formulation of the commutator, leading to a matrix norm which depends on frequency. This bound can be regarded as a worst-case instance holding for all input signals. For two-channel FIR filter banks with perfect reconstruction (PR), we show that the bound is predominantly determined by the structure of the filter bank rather than by the type of filters used. Moreover, the framework allows us to identify the signals for which the upper bound is almost reached as so-called near maximizers of the frequency-dependent matrix norm. For unitary PR filter banks, these near maximizers are shown to be narrow-band signals. To complement this worst-case bound, we derive an additional bound on shift variance for input signals with given amplitude spectra, where we use wide-band model spectra instead of narrow-band signals. Like the operator norm, this additional bound is based on the above frequency-dependent matrix norm. We provide results for various critically sampled two-channel filter banks, such as quadrature mirror filters, PR conjugated quadrature filters, wavelets, and biorthogonal filter banks.

  • Orthogonal and Biorthogonal √3-Refinement Wavelets for Hexagonal Data Processing

    Publication Year: 2009 , Page(s): 4304 - 4313
    Cited by:  Papers (1)

    The hexagonal lattice was proposed as an alternative method for image sampling. Hexagonal sampling has certain advantages over the conventionally used square sampling. Hence, the hexagonal lattice has been used in many areas. A hexagonal lattice allows √3, dyadic, and √7 refinements, which makes it possible to use the multiresolution (multiscale) analysis method to process hexagonally sampled data. The √3-refinement is the most appealing refinement for multiresolution data processing due to the fact that it has the slowest progression through scale, and hence, it provides more resolution levels from which one can choose. This fact is the main motivation for the study of √3-refinement surface subdivision, and it is also the main reason for the recommendation to use the √3-refinement for discrete global grid systems. However, there is little work on compactly supported √3-refinement wavelets. In this paper, we study the construction of compactly supported orthogonal and biorthogonal √3-refinement wavelets. In particular, we present a block structure of orthogonal FIR filter banks with twofold symmetry and construct the associated orthogonal √3-refinement wavelets. We study the sixfold axial symmetry of perfect reconstruction (biorthogonal) FIR filter banks. In addition, we obtain a block structure of sixfold symmetric √3-refinement filter banks and construct the associated biorthogonal wavelets.

  • A Robust Chinese Remainder Theorem With Its Applications in Frequency Estimation From Undersampled Waveforms

    Publication Year: 2009 , Page(s): 4314 - 4322
    Cited by:  Papers (11)

    The Chinese remainder theorem (CRT) makes it possible to reconstruct a large integer from its remainders modulo several moduli. In this paper, we propose a robust reconstruction algorithm, called robust CRT, for the case when the remainders have errors. We show that, using the proposed robust CRT, the reconstruction error is upper bounded by the maximal remainder error range, named the remainder error bound, if the remainder error bound is less than one quarter of the greatest common divisor (gcd) of all the moduli. We then apply the robust CRT to estimate frequencies when the signal waveforms are undersampled multiple times. We show that with the robust CRT, the sampling frequencies can be significantly reduced.
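    The noise-free building block, classical CRT reconstruction, fits in a few lines (the paper's robust variant additionally tolerates bounded remainder errors and moduli sharing a common divisor; the moduli below are pairwise coprime and chosen for illustration):

```python
from math import prod

# Classical CRT: reconstruct an integer from its remainders modulo
# pairwise-coprime moduli (the noise-free building block of robust CRT).
def crt(remainders, moduli):
    M = prod(moduli)
    x = 0
    for r, m in zip(remainders, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)      # pow(..., -1, m) = modular inverse
    return x % M

moduli = [5, 7, 9, 11]
n = 2024                                  # any n < 5*7*9*11 = 3465
print(crt([n % m for m in moduli], moduli))   # 2024
```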

  • Time-Frequency Coherent Modulation Filtering of Nonstationary Signals

    Publication Year: 2009 , Page(s): 4323 - 4332
    Cited by:  Papers (23)

    Modulation filtering is a class of techniques for filtering the slowly varying modulation envelopes of frequency subbands of a signal, ideally without affecting the subband signal's temporal fine structure. Coherent modulation filtering is a potentially more effective technique of this type in which, via an explicit product model, subband envelopes are determined by demodulating the subband signal with a coherently detected subband carrier. In this paper we propose a coherent modulation filtering technique based on detecting the instantaneous frequency of a subband from its time-frequency representation. We develop theory showing that coherent modulation filtering imposes a new bandlimiting constraint on the product of the modulator and carrier, as well as a condition for recovering arbitrarily chosen envelopes and carriers from their modulation product. We then formally show that a carrier estimate based on the time-varying spectral center of gravity satisfies the bandlimiting condition. This bandwidth constraint leads to effective and artifact-free modulation filters, offering new approaches for potential signal modification. However, the spectral center of gravity does not, in general, satisfy the condition for arbitrary carrier recovery. Finally, results from modulation filtering a speech signal are used to validate the theory.
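    A simplified demonstration of the product model (the carrier is known here, whereas the paper detects it from the instantaneous frequency; signal parameters are illustrative): demodulating a subband by its carrier and lowpass filtering recovers the modulation envelope.

```python
import numpy as np

# Simplified product-model demo with a known carrier (the paper's method
# detects the carrier coherently): demodulate, then lowpass to get the envelope.
fs = 8000
t = np.arange(fs) / fs
modulator = 1 + 0.5 * np.cos(2 * np.pi * 3 * t)   # slowly varying envelope
carrier = np.cos(2 * np.pi * 440 * t)
x = modulator * carrier

X = np.fft.rfft(x * 2 * carrier)          # product demodulation
X[50:] = 0                                # crude lowpass below 50 Hz
envelope = np.fft.irfft(X)
print(np.max(np.abs(envelope - modulator)) < 1e-6)
```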

  • Stagewise Weak Gradient Pursuits

    Publication Year: 2009 , Page(s): 4333 - 4346
    Cited by:  Papers (11)

    Finding sparse solutions to underdetermined inverse problems is a fundamental challenge encountered in a wide range of signal processing applications, from signal acquisition to source separation. This paper looks at greedy algorithms that are applicable to very large problems. The main contribution is the development of a new selection strategy (called stagewise weak selection) that effectively selects several elements in each iteration. The new selection strategy is based on the realization that many classical proofs for recovery of sparse signals can be trivially extended to the new setting. What is more, simulation studies show the computational benefits and good performance of the approach. This strategy can be used in several greedy algorithms, and we argue for the use within the gradient pursuit framework in which selected coefficients are updated using a conjugate update direction. For this update, we present a fast implementation and novel convergence result.
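    One stagewise weak selection step can be sketched as follows (illustrative dictionary and threshold; the full algorithm iterates and uses conjugate-direction updates rather than the plain least squares shown here): keep every atom whose correlation with the residual is within a factor α of the best one.

```python
import numpy as np

rng = np.random.default_rng(7)

# One stagewise weak selection step (illustrative parameters): select all
# atoms whose correlation is within a factor alpha of the largest one, then
# refit the coefficients on the selected support.
D = rng.normal(size=(64, 256))
D /= np.linalg.norm(D, axis=0)            # unit-norm dictionary atoms
x0 = np.zeros(256)
x0[[3, 50, 120]] = [1.0, -2.0, 1.5]       # 3-sparse ground truth
y = D @ x0

alpha = 0.7
corr = np.abs(D.T @ y)
selected = np.flatnonzero(corr >= alpha * corr.max())

coef, *_ = np.linalg.lstsq(D[:, selected], y, rcond=None)
residual = y - D[:, selected] @ coef
print(np.linalg.norm(residual) < np.linalg.norm(y))   # energy strictly drops
```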

  • Relaxed Conditions for Sparse Signal Recovery With General Concave Priors

    Publication Year: 2009 , Page(s): 4347 - 4354
    Cited by:  Papers (9)

    The emerging theory of compressive or compressed sensing challenges the convention of modern digital signal processing by establishing that exact signal reconstruction is possible for many problems where the sampling rate falls well below the Nyquist limit. Following the landmark works of Candes and Donoho on the performance of l1-minimization models for signal reconstruction, several authors demonstrated that certain nonconvex reconstruction models consistently outperform the convex l1-model in practice at very low sampling rates despite the fact that no global minimum can be theoretically guaranteed. Nevertheless, there has been little theoretical investigation into the performance of these nonconvex models. In this paper, a notion of weak signal recoverability is introduced and the performance of nonconvex reconstruction models employing general concave metric priors is investigated under this model. The sufficient conditions for establishing weak signal recoverability are shown to substantially relax as the prior functional is parameterized to more closely resemble the targeted l0-model, offering new insight into the empirical performance of this general class of reconstruction methods. Examples of relaxation trends are shown for several different prior models.

  • Joint Bayesian Endmember Extraction and Linear Unmixing for Hyperspectral Imagery

    Publication Year: 2009 , Page(s): 4355 - 4368
    Cited by:  Papers (71)

    This paper studies a fully Bayesian algorithm for endmember extraction and abundance estimation for hyperspectral imagery. Each pixel of the hyperspectral image is decomposed as a linear combination of pure endmember spectra following the linear mixing model. The estimation of the unknown endmember spectra is conducted in a unified manner by generating the posterior distribution of abundances and endmember parameters under a hierarchical Bayesian model. This model assumes conjugate prior distributions for these parameters, accounts for nonnegativity and full-additivity constraints, and exploits the fact that the endmember proportions lie on a lower dimensional simplex. A Gibbs sampler is proposed to overcome the complexity of evaluating the resulting posterior distribution. This sampler generates samples distributed according to the posterior distribution and estimates the unknown parameters using these generated samples. The accuracy of the joint Bayesian estimator is illustrated by simulations conducted on synthetic and real AVIRIS images.
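    The linear mixing model at the core of the method can be written down directly (toy sizes and random endmembers; the paper's contribution is the joint Bayesian estimation when the endmembers are unknown):

```python
import numpy as np

rng = np.random.default_rng(8)

# Linear mixing model: pixel = endmember matrix @ abundances, with
# nonnegative abundances summing to one (a point on the simplex).
B, R = 50, 3                              # spectral bands, endmembers
M = rng.uniform(0.0, 1.0, size=(B, R))    # toy endmember spectra
a = np.array([0.2, 0.5, 0.3])             # abundances: >= 0, sum to one
pixel = M @ a

# with known endmembers and no noise, least squares recovers the abundances;
# the paper instead estimates M and a jointly via a Gibbs sampler
a_hat, *_ = np.linalg.lstsq(M, pixel, rcond=None)
print(np.allclose(a_hat, a))
```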

  • Decomposable Principal Component Analysis

    Publication Year: 2009 , Page(s): 4369 - 4377
    Cited by:  Papers (18)

    In this paper, we consider principal component analysis (PCA) in decomposable Gaussian graphical models. We exploit the prior information in these models in order to distribute PCA computation. For this purpose, we reformulate the PCA problem in the sparse inverse covariance (concentration) domain and address the global eigenvalue problem by solving a sequence of local eigenvalue problems in each of the cliques of the decomposable graph. We illustrate our methodology in the context of decentralized anomaly detection in the Abilene backbone network. Based on the topology of the network, we propose an approximate statistical graphical model and distribute the computation of PCA.
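    For reference, the centralized global eigenvalue problem that the paper distributes over cliques is ordinary PCA of the sample covariance (toy data below):

```python
import numpy as np

rng = np.random.default_rng(9)

# The global problem being distributed: the leading eigenvectors of the
# sample covariance (ordinary, centralized PCA shown here for reference).
X = rng.normal(size=(1000, 6)) @ rng.normal(size=(6, 6))   # correlated data
S = np.cov(X, rowvar=False)
evals, evecs = np.linalg.eigh(S)          # eigenvalues in ascending order
principal = evecs[:, ::-1][:, :2]         # top-2 principal directions
print(principal.shape)
print(np.allclose(principal.T @ principal, np.eye(2)))
```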

  • An Iterative Bayesian Algorithm for Sparse Component Analysis in Presence of Noise

    Publication Year: 2009 , Page(s): 4378 - 4390
    Cited by:  Papers (13)

    We present a Bayesian approach for sparse component analysis (SCA) in the noisy case. The algorithm is essentially a method for obtaining sufficiently sparse solutions of underdetermined systems of linear equations with additive Gaussian noise. In general, an underdetermined system of linear equations has infinitely many solutions. However, it has been shown that sufficiently sparse solutions can be uniquely identified. Our main objective is to find this unique solution. Our method is based on a novel estimation of source parameters and maximum a posteriori (MAP) estimation of the sources. To tackle the great complexity of the MAP algorithm when the number of sources and mixtures becomes large, we propose an iterative Bayesian algorithm (IBA). The IBA algorithm is also based on MAP estimation of the sources, but optimized with a steepest-ascent method. The convergence of the IBA algorithm and its convergence to the true global maximum are also proved. Simulation results show that the performance achieved by the IBA algorithm is among the best, while its complexity is rather high in comparison with other algorithms. Simulation results also show the low sensitivity of the IBA algorithm to its simulation parameters.

  • Designing Unimodular Sequence Sets With Good Correlations—Including an Application to MIMO Radar

    Publication Year: 2009 , Page(s): 4391 - 4405
    Cited by:  Papers (35)
    PDF (1200 KB) | HTML

    A multiple-input multiple-output (MIMO) radar system that transmits orthogonal waveforms via its antennas can achieve a greatly increased virtual aperture compared with its phased-array counterpart. This increased virtual aperture enables many of the MIMO radar advantages, including enhanced parameter identifiability and improved resolution. Practical radar requirements such as unit peak-to-average power ratio and range compression dictate that we use MIMO radar waveforms that have constant modulus and good auto- and cross-correlation properties. We present in this paper new computationally efficient cyclic algorithms for MIMO radar waveform synthesis. These algorithms can be used to design unimodular MIMO sequences that have very low auto- and cross-correlation sidelobes in a specified lag interval, as well as very long sequences that could hardly be handled by other algorithms previously suggested in the literature. A number of examples are provided to demonstrate the performance of the new waveform synthesis algorithms. View full abstract»
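For intuition about the design targets involved, here is a hedged sketch of a classical unimodular sequence with ideal periodic autocorrelation: a Zadoff-Chu sequence of odd prime length. This is only background (the paper's cyclic algorithms address the harder problem of designing sequence sets with low aperiodic auto- and cross-correlation); the length and root parameters below are illustrative.

```python
# Constant-modulus (unimodular) sequence with zero periodic
# autocorrelation sidelobes: a Zadoff-Chu sequence.
import cmath

def zadoff_chu(N, u=1):
    # odd N with gcd(u, N) = 1 assumed
    return [cmath.exp(-1j * cmath.pi * u * n * (n + 1) / N) for n in range(N)]

def periodic_autocorr(x, k):
    N = len(x)
    return sum(x[n] * x[(n + k) % N].conjugate() for n in range(N))

x = zadoff_chu(7)
# peak at lag 0 equals N; all other lags should vanish
peak = abs(periodic_autocorr(x, 0))
sidelobe = max(abs(periodic_autocorr(x, k)) for k in range(1, 7))
```

Every sample has unit modulus, so the sequence satisfies the peak-to-average power constraint by construction.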

  • Cooperative Wireless Medium Access Exploiting Multi-Beam Adaptive Arrays and Relay Selection

    Publication Year: 2009 , Page(s): 4406 - 4417
    PDF (471 KB) | HTML

    Cooperative transmission among wireless network nodes can be exploited to resolve collisions and thereby enhance the network throughput. Incorporation of a multi-beam adaptive array (MBAA) at the base station/access point (destination) receiver has been shown to improve network performance. In this paper, we propose an efficient cooperative wireless medium access scheme that exploits novel relay selection methods in a network equipped with an MBAA at the destination receiver. Unlike existing techniques that require the estimation of angles-of-arrival (AoAs), the proposed scheme uses the spatial correlation among users for simpler but more effective collision detection and resolution. We present two useful relay selection methods based on channel gain and spatial correlation. It is shown that the joint use of an effective relay selection method and an MBAA in a wireless network can significantly improve the uplink throughput. The throughput of the proposed scheme and its upper bound are analytically derived. Numerical and simulation results demonstrate the significant performance enhancement achieved by the proposed cooperative wireless medium access scheme. View full abstract»
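As a toy illustration of channel-gain-based relay selection (one common criterion is max-min over the two hops; the paper's spatial-correlation-based selection is not reproduced here, and the gain values below are made up):

```python
# Max-min relay selection: pick the relay whose weaker hop
# (source->relay or relay->destination) is strongest.
def select_relay(gains_sr, gains_rd):
    # gains_sr[i], gains_rd[i]: channel gains of relay i's two hops
    best, best_metric = None, float("-inf")
    for i, (g1, g2) in enumerate(zip(gains_sr, gains_rd)):
        metric = min(g1, g2)          # the bottleneck hop decides
        if metric > best_metric:
            best, best_metric = i, metric
    return best

# relay 1 has the best bottleneck gain: min(0.9, 0.8) = 0.8
idx = select_relay([0.4, 0.9, 0.7], [0.95, 0.8, 0.3])
```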

  • A Convex Analysis-Based Minimum-Volume Enclosing Simplex Algorithm for Hyperspectral Unmixing

    Publication Year: 2009 , Page(s): 4418 - 4432
    Cited by:  Papers (75)
    PDF (3343 KB) | HTML

    Hyperspectral unmixing aims at identifying the hidden spectral signatures (or endmembers) and their corresponding proportions (or abundances) from an observed hyperspectral scene. Many existing hyperspectral unmixing algorithms were developed under a commonly used assumption that pure pixels exist. However, the pure-pixel assumption may be seriously violated for highly mixed data. Based on intuitive grounds, Craig reported an unmixing criterion without requiring the pure-pixel assumption, which estimates the endmembers by vertices of a minimum-volume simplex enclosing all the observed pixels. In this paper, we incorporate convex analysis and Craig's criterion to develop a minimum-volume enclosing simplex (MVES) formulation for hyperspectral unmixing. A cyclic minimization algorithm for approximating the MVES problem is developed using linear programs (LPs), which can be practically implemented by readily available LP solvers. We also provide a non-heuristic guarantee of our MVES problem formulation, where the existence of pure pixels is proved to be a sufficient condition for MVES to perfectly identify the true endmembers. Some Monte Carlo simulations and real data experiments are presented to demonstrate the efficacy of the proposed MVES algorithm over several existing hyperspectral unmixing methods. View full abstract»
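The quantity Craig's criterion minimizes is the volume of the simplex enclosing the pixel cloud, which for vertices in general position is a determinant. A minimal 2-D sketch of that volume computation (a triangle's area; the paper's LP-based cyclic minimization is not reproduced, and the vertex coordinates are illustrative):

```python
# Volume of a simplex from its vertices via a determinant --
# the objective that minimum-volume enclosing simplex methods shrink.
def simplex_volume_2d(vertices):
    # vertices: three points (x, y); volume = |det of edge matrix| / 2!
    (x0, y0), (x1, y1), (x2, y2) = vertices
    det = (x0 - x2) * (y1 - y2) - (x1 - x2) * (y0 - y2)
    return abs(det) / 2.0

# right triangle with legs 2 and 3: area = 3
area = simplex_volume_2d([(2.0, 0.0), (0.0, 3.0), (0.0, 0.0)])
```

In d dimensions the same formula uses a d-by-d edge matrix and a 1/d! normalization; MVES searches over simplices whose volume is minimal subject to enclosing every observed pixel.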


Aims & Scope

IEEE Transactions on Signal Processing covers novel theory, algorithms, performance analyses and applications of techniques for the processing, understanding, learning, retrieval, mining, and extraction of information from signals.

Full Aims & Scope

Meet Our Editors

Editor-in-Chief
Sergios Theodoridis
University of Athens