IEEE Transactions on Signal Processing

Issue 9 • September 2005

  • Table of contents

    Page(s): c1 - c4
  • IEEE Transactions on Signal Processing publication information

    Page(s): c2
  • Applications of the Signal Space Separation Method

    Page(s): 3359 - 3372

    The reliability of biomagnetic measurements is traditionally challenged by external interference signals, movement artifacts, and comparison problems caused by different positions of the subjects or different sensor configurations. The Signal Space Separation method (SSS) idealizes magnetic multichannel signals by transforming them into device-independent idealized channels representing the measured data in uncorrelated form. The transformation has separate components for the biomagnetic and external interference signals, and thus, the biomagnetic signals can be reconstructed simply by leaving out the contribution of the external interference. The foundation of SSS is a basis spanning all multichannel signals of magnetic origin. It is based on Maxwell's equations and the geometry of the sensor array only, with the assumption that the sensors are located in a current-free volume. SSS is demonstrated to provide suppression of external interference signals, standardization of different positions of the subject, standardization of different sensor configurations, compensation for distortions caused by movement of the subject (even a subject containing magnetic impurities), suppression of sporadic sensor artifacts, a tool for fine calibration of the device, extraction of biomagnetic DC fields, and an aid for realizing an active compensation system. Thus, SSS removes many limitations of traditional biomagnetic measurements.

  • Regularized Spectral Matching for Blind Source Separation. Application to fMRI Imaging

    Page(s): 3373 - 3383

    The main contribution of this paper is to present a Bayesian approach for solving the noisy instantaneous blind source separation problem based on second-order statistics of the time-varying spectrum. The success of the blind estimation relies on the nonstationarity of the second-order statistics and their intersource diversity. Choosing the time-frequency domain as the signal representation space and transforming the data by a short-time Fourier transform (STFT), our method presents a simple EM algorithm that can efficiently deal with the time-varying spectrum diversity of the sources. The estimation variance of the STFT is reduced by averaging across time-frequency subdomains. The algorithm is demonstrated on a standard functional magnetic resonance imaging (fMRI) experiment involving visual stimuli in a block design. Explicitly taking into account the noise in the model, the proposed algorithm has the advantage of extracting only relevant task-related components and considers the remaining components (artifacts) to be noise.

  • Single Evoked Somatosensory MEG Responses Extracted by Time Delayed Decorrelation

    Page(s): 3384 - 3392

    Measurable magnetoencephalographic responses of the cortex due to an electrical stimulus at the wrist start 20 ms after the stimulus. This early magnetic response is known as the N20m, which can be seen by averaging over hundreds of stimulation epochs. Applying Independent Component Analysis (ICA) based on time-delayed decorrelation to such data allows the extraction of the single responses starting 20 ms after the stimulus without the need for averaging. One of the independent components has a field pattern that is very similar to the N20m. Using this independent component, it is found that the response at 20 ms is stable over a measurement session lasting 4000 s and containing 12 000 stimulations, whereas later responses show highly significant changes over time. To suppress slower activity and noise, a high-pass filter at 55 Hz is applied to the data. One of the subsequently calculated independent components shows the response at 20 ms much more clearly than before filtering. Analyzing the amplitude distribution of this response shows that 97% of the stimulations have a measurable response above base line level, whereas for conventional methods such as projection and notch filtering, only 91% of the responses are detectable. The high proportion of measurable responses indicates the signal separation power of independent component analysis, and it supports the hypothesis that the early stages of sensory cortical processing can be described as a linear processing chain with small variability, at least from a macroscopic point of view.

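
An aside on the technique itself: time-delayed decorrelation can be sketched generically in a few lines. The snippet below is an AMUSE-style illustration, assuming zero-mean sources with distinct autocorrelations at the chosen lag; it is not the authors' implementation.

```python
import numpy as np

def amuse(X, lag=1):
    """Blind source separation by time-delayed decorrelation (AMUSE-style).

    X : (channels, samples) array of mixtures.
    Returns one estimated source per row, assuming the sources have
    distinct autocorrelations at the chosen lag.
    """
    X = X - X.mean(axis=1, keepdims=True)
    # Whiten the mixtures using the zero-lag covariance.
    d, E = np.linalg.eigh(X @ X.T / X.shape[1])
    Z = (E @ np.diag(d ** -0.5) @ E.T) @ X
    # Eigenvectors of the symmetrized lagged covariance give the rotation
    # that separates the whitened data into uncorrelated sources.
    C = Z[:, lag:] @ Z[:, :-lag].T / (Z.shape[1] - lag)
    _, V = np.linalg.eigh((C + C.T) / 2)
    return V.T @ Z
```

Only second-order statistics are used, which is what makes single-trial extraction of weak evoked responses tractable without averaging.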
  • Toeplitz-Based Iterative Image Reconstruction for MRI With Correction for Magnetic Field Inhomogeneity

    Page(s): 3393 - 3402

    In some types of magnetic resonance (MR) imaging, particularly functional brain scans, the conventional Fourier model for the measurements is inaccurate. Magnetic field inhomogeneities, which are caused by imperfect main fields and by magnetic susceptibility variations, induce distortions in images that are reconstructed by conventional Fourier methods. These artifacts hamper the use of functional MR imaging (fMRI) in brain regions near air/tissue interfaces. Recently, iterative methods that combine the conjugate gradient (CG) algorithm with nonuniform FFT (NUFFT) operations have been shown to provide considerably improved image quality relative to the conjugate-phase method. However, for non-Cartesian k-space trajectories, each CG-NUFFT iteration requires numerous k-space interpolations; these are operations that are computationally expensive and poorly suited to fast hardware implementations. This paper proposes a faster iterative approach to field-corrected MR image reconstruction based on the CG algorithm and certain Toeplitz matrices. This CG-Toeplitz approach requires k-space interpolations only for the initial iteration; thereafter, only fast Fourier transforms (FFTs) are required. Simulation results show that the proposed CG-Toeplitz approach produces image quality equivalent to that of the CG-NUFFT method with significantly reduced computation time.

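
The speedup hinges on a standard identity: a Toeplitz matrix-vector product can be evaluated with FFTs alone after embedding the matrix in a circulant one. A generic sketch of that kernel (an assumed illustration, not the paper's reconstruction code):

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """Multiply an n x n Toeplitz matrix by a vector using FFTs only.

    c : first column, r : first row (with c[0] == r[0]), x : length-n vector.
    The Toeplitz matrix is embedded in a 2n x 2n circulant matrix, whose
    action reduces to pointwise multiplication in the Fourier domain.
    """
    n = len(x)
    # First column of the circulant embedding: [c, 0, r reversed without r[0]].
    col = np.concatenate([c, [0.0], r[:0:-1]])
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(x, 2 * n))
    return y[:n]
```

Each product costs O(n log n) instead of O(n^2), which is why only the first iteration of such a scheme needs explicit k-space interpolation.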
  • Local Linear Estimators for the Bioelectromagnetic Inverse Problem

    Page(s): 3403 - 3412

    Linear estimators have been used widely in the bioelectromagnetic inverse problem, but their properties and relationships have not been fully characterized. Here, we show that the most widely used linear estimators may be characterized by a choice of norms on signal space and on source space. These norms depend, in part, on assumptions about the signal space and source space covariances. We demonstrate that two estimator classes (standardized and weight vector normalized) yield unbiased estimators of source location for simple source models (in the noise-free case only) but biased estimators of source magnitude. In the presence of instrumental (white) noise, we show that the nonadaptive standardized estimator is a biased estimator of source location, while the adaptive weight vector normalized estimator remains unbiased. A third class (distortionless) is an unbiased estimator of source magnitude but a biased estimator of source location.

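
For concreteness, the simplest member of this estimator family, a Tikhonov-regularized minimum-norm estimator, can be sketched as follows (a generic sketch with an assumed random lead field, not one of the paper's specific estimator classes):

```python
import numpy as np

def min_norm_estimate(L, b, lam=1e-2):
    """Minimum-norm linear estimate of source amplitudes s from
    sensor data b = L @ s + noise.

    L   : (sensors, sources) lead-field matrix, with far fewer sensors
          than sources, so the problem is underdetermined.
    lam : Tikhonov regularization weight.
    """
    G = L @ L.T + lam * np.eye(L.shape[0])
    return L.T @ np.linalg.solve(G, b)
```

Choosing different norms on signal and source space amounts to replacing the identity weightings implicit in this formula with covariance-based ones.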
  • Multiple Hypothesis Mapping of Functional MRI Data in Orthogonal and Complex Wavelet Domains

    Page(s): 3413 - 3426

    We are interested in methods for multiple hypothesis testing that optimize power to refute the null hypothesis while controlling the false discovery rate (FDR). The wavelet transform of a spatial map of brain activation statistics can be tested in two stages to achieve this objective: First, a set of possible wavelet coefficients to test is reduced, and second, each hypothesis in the remaining subset is formally tested. We show that a Bayesian bivariate shrinkage operator (BaybiShrink) for the first step provides a powerful and expedient alternative to a subband adaptive chi-squared test or an enhanced FDR algorithm based on the generalized degrees of freedom. We also investigate the dual-tree complex wavelet transform (CWT) as an alternative basis to the orthogonal discrete wavelet transform (DWT). We design and validate a test for activation based on the magnitude of the complex wavelet coefficients and show that this confers improved specificity for mapping spatial signals. The methods are applied to simulated and experimental data, including a pharmacological magnetic resonance imaging (MRI) study. We conclude that using BaybiShrink to define a reduced set of complex wavelet coefficients, and testing the magnitude of each complex pair to control the FDR, represents a competitive solution for multiple hypothesis mapping in fMRI.

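
The formal testing stage controls the FDR; the classical Benjamini-Hochberg step-up rule on which such procedures build can be sketched as follows (a generic sketch, not the enhanced FDR algorithm discussed in the paper):

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Return a boolean mask of hypotheses rejected at FDR level q
    (Benjamini-Hochberg step-up procedure)."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    # Find the largest k with p_(k) <= k*q/m and reject the k smallest.
    below = p[order] <= q * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        reject[order[:k + 1]] = True
    return reject
```

Reducing the candidate set first, as the two-stage approach does, raises power because m shrinks and the per-rank thresholds k*q/m become less stringent.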
  • The Wald test and Cramér-Rao bound for misspecified models in electromagnetic source analysis

    Page(s): 3427 - 3435

    By using signal processing techniques, an estimate of activity in the brain from the electro- or magneto-encephalogram (EEG or MEG) can be obtained. For a proper analysis, a test is required to indicate whether the model for brain activity fits. A problem in using such tests is that often, not all assumptions are satisfied, such as the assumption on the number of shells in EEG. In such a case, a test on the number of sources (model order) might still be of interest. A detailed analysis is presented of the Wald test for these cases. One of the advantages of the Wald test is that it can be used when not all assumptions are satisfied. Two different, previously suggested, Wald tests in electromagnetic source analysis (EMSA) are examined: a test on source amplitudes and a test on the closeness of source pairs. The Wald test is analytically studied in terms of alternative hypotheses that are close to the null hypothesis (local alternatives). It is shown that the Wald test is asymptotically unbiased and that it has the correct level and power, which makes it appropriate for use in EMSA. An accurate estimate of the Cramér-Rao bound (CRB) is required for the use of the Wald test when not all assumptions are satisfied. The sandwich CRB is used for this purpose. It is defined for nonseparable least squares with constraints required for the Wald test on amplitudes. Simulations with EEG show that when the sensor positions are incorrect, or the number of shells is incorrect, or the conductivity parameter is incorrect, the CRB and Wald test remain accurate with a moderate number of trials. Additionally, the CRB and Wald test appear robust against an incorrect assumption on the noise covariance. A combination of incorrect sensor positions and noise covariance affects the possibility of detecting a source with small amplitude.

  • Penalized Partially Linear Models Using Sparse Representations With an Application to fMRI Time Series

    Page(s): 3436 - 3448

    In this paper, we consider modeling the nonparametric component in partially linear models (PLMs) using linear sparse representations, e.g., wavelet expansions. Two types of representations are investigated, namely, orthogonal bases (complete) and redundant overcomplete expansions. For bases, we introduce a regularized estimator of the nonparametric part. The important contribution here is that the nonparametric part can be parsimoniously estimated by choosing an appropriate penalty function for which the hard and soft thresholding estimators are special cases. This allows us to represent in an effective manner a broad class of signals, including stationary and/or nonstationary signals, and avoids excessive bias in estimating the parametric component. We also give a fast estimation algorithm. The method is then generalized to handle the case of overcomplete representations. A large-scale simulation study is conducted to illustrate the finite sample properties of the estimator. The estimator is finally applied to real neurophysiological functional magnetic resonance imaging (fMRI) data sets that are suspected to contain both smooth and transient drift features.

  • Simultaneous Estimation and Testing of Sources in Multiple MEG Data Sets

    Page(s): 3449 - 3460

    The proposed Extended Coupled Dipole Model (ECDM) is a trilinear component model that can be used to analyze multiple, related MEG data sets simultaneously. Related MEG data sets are data sets that contain activity of the same sources or activity of sources that have proportional source amplitudes. The simultaneous model uses a set of common sources and a set of common source time functions (wave shapes) to model the measured data in each data set. The set of common sources contains all sources that are active in at least one of the data sets to be analyzed. The number of common spatial and temporal components is specified by the user. The model for each data set is a linear combination of these common spatial and temporal components. This linear combination is estimated in a coupling matrix. Unlike the Coupled Dipole Model, where the user selects certain entries of the coupling matrix to be zero, the entire coupling matrix is estimated in the ECDM. This yields a more objective and statistically transparent estimation method, of which the identifiability constraints do not depend on the user's chosen design as in the CDM. Cramér–Rao bounds are derived for the ECDM, and the significance of the estimated source activity is computed and illustrated by confidence regions around estimated source time functions.

  • Assessing the Relevance of fMRI-Based Prior in the EEG Inverse Problem: A Bayesian Model Comparison Approach

    Page(s): 3461 - 3472

    Characterizing the cortical activity from electro- and magneto-encephalography (EEG/MEG) data requires solving an ill-posed inverse problem that does not admit a unique solution. As a consequence, the use of functional neuroimaging, for instance, functional Magnetic Resonance Imaging (fMRI), constitutes an appealing way of constraining the solution. However, the match between bioelectric and metabolic activities is desirable but not assured. Therefore, the introduction of spatial priors derived from other functional modalities in the EEG/MEG inverse problem should be considered with caution. In this paper, we propose a Bayesian characterization of the relevance of fMRI-derived prior information regarding the EEG/MEG data. This is done by quantifying the adequacy of this prior to the data, compared with that obtained using a noninformative prior instead. This quantitative comparison, using the so-called Bayes factor, allows us to decide whether the informative prior should (or not) be included in the inverse solution. We validate our approach using extensive simulations, where fMRI-derived priors are built as perturbed versions of the simulated EEG sources. Moreover, we show how this inference framework can be generalized to optimize the way we should incorporate the informative prior.

  • A Factor-Image Framework to Quantification of Brain Receptor Dynamic PET Studies

    Page(s): 3473 - 3487

    The positron emission tomography (PET) imaging technique enables the measurement of receptor distribution or neurotransmitter release in the living brain and the changes of the distribution with time and thus allows quantification of binding sites as well as the affinity of a radioligand. However, quantification of receptor binding studies obtained with PET is complicated by tissue heterogeneity in the sampling image elements (i.e., voxels, pixels). This effect is caused by the limited spatial resolution of the PET scanner. Spatial heterogeneity is often essential in understanding the underlying receptor binding process. Tracer kinetic modeling also often requires an invasive collection of arterial blood samples. In this paper, we propose a likelihood-based framework in the voxel domain for quantitative imaging with or without the blood sampling of the input function. Radioligand kinetic parameters are estimated together with the input function. The parameters are initialized by a subspace-based algorithm and further refined by an iterative likelihood-based estimation procedure. The performance of the proposed scheme is examined by simulations. The results show that the proposed scheme provides reliable estimation of factor time-activity curves (TACs) and the underlying parametric images. A good match is noted between the result of the proposed approach and that of the Logan plot. Real brain PET data are also examined, and good performance is observed in determining the TACs and the underlying factor images.

  • Joint Detection-Estimation of Brain Activity in Functional MRI: A Multichannel Deconvolution Solution

    Page(s): 3488 - 3502

    Analysis of functional magnetic resonance imaging (fMRI) data focuses essentially on two questions: first, a detection problem that studies which parts of the brain are activated by a given stimulus and, second, an estimation problem that investigates the temporal dynamics of the brain response during activations. Up to now, these questions have been addressed independently. However, the activated areas need to be known prior to the analysis of the temporal dynamics of the response. Similarly, a typical shape of the response has to be assumed a priori for detection purposes. This situation motivates the need for new methods in neuroimaging data analysis that are able to go beyond this unsatisfactory tradeoff. The present paper presents a novel detection-estimation approach to perform these two tasks simultaneously in region-based analysis. In the Bayesian framework, the detection of brain activity is achieved using a mixture of two Gaussian distributions as a prior model on the “neural” response levels, whereas the hemodynamic impulse response is constrained to be smooth enough in the time domain with a Gaussian prior. All parameters of interest, as well as hyperparameters, are estimated from the posterior distribution using Gibbs sampling and posterior mean estimates. Results obtained both on simulated and real fMRI data demonstrate first that our approach can segregate activated and nonactivated voxels in a given region of interest (ROI) and, second, that it can provide spatial activation maps without any assumption on the exact shape of the Hemodynamic Response Function (HRF), in contrast to standard model-based analysis.

  • Conditional Correlation as a Measure of Mediated Interactivity in fMRI and MEG/EEG

    Page(s): 3503 - 3516

    Many measures have been proposed so far to extract brain functional interactivity from functional magnetic resonance imaging (fMRI) and magnetoencephalography/electroencephalography (MEG/EEG) data sets. Unfortunately, none has been able to provide a relevant, self-contained, and common definition of brain interaction. In this paper, we propose a first step in this direction. We first introduce a common terminology together with a cross-modal definition of interaction. In this setting, we investigate the commonalities shared by some measures of interaction proposed in the literature. We show that temporal correlation, nonlinear correlation, mutual information, generalized synchronization, phase synchronization, coherence, and phase locking value (PLV) actually measure the same quantity (namely correlation) when one is investigating linear interactions between independently and identically distributed Gaussian variables. We also demonstrate that these data-driven measures can only partly account for the interaction patterns that can be expressed by the effective connectivity of structural equation modeling (SEM). To bridge this gap, we suggest the use of conditional correlation, which is shown to be related to mediated interaction.

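
Conditional correlation is simply correlation between regression residuals; a minimal sketch, assuming least-squares detrending on the conditioning signals:

```python
import numpy as np

def conditional_corr(x, y, z):
    """Correlation between x and y after removing, by least squares,
    the part of each that is linearly explained by the signals in z.

    x, y : (n,) arrays; z : (n, k) array of conditioning signals.
    """
    Z = np.column_stack([np.ones(len(x)), z])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return rx @ ry / np.sqrt((rx @ rx) * (ry @ ry))
```

When z mediates the coupling between x and y, the marginal correlation is high while the conditional correlation drops toward zero, which is the signature of mediated interaction.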
  • Two-Dimensional Adaptive Array Beamforming With Multiple Beam Constraints Using a Generalized Sidelobe Canceller

    Page(s): 3517 - 3529

    This paper deals with the problem of two-dimensional (2-D) adaptive array beamforming with multiple beam constraints (MBC) using a generalized sidelobe canceller (GSC). We present a method for the construction of the signal blocking matrix required by the 2-D GSC. The resulting 2-D adaptive beamformer can provide almost the same performance as conventional 2-D adaptive beamformers based on a linearly constrained minimum variance (LCMV) criterion. The efficiency of the proposed GSC stems from the fact that the required signal blocking matrix can be constructed by computing only a few entries from analytical formulas. In comparison with conventional methods, the proposed technique avoids the computational cost of the eigendecomposition required for finding the 2-D signal blocking matrix. For dealing with the performance degradation due to coherent interference, we present a 2-D weighted spatial smoothing scheme to effectively alleviate the coherent jamming effect. Several simulation examples are provided for illustration and comparison.

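
The GSC structure itself is compact: a quiescent weight satisfies the beam constraints while a blocking matrix feeds an unconstrained adaptive branch. A minimal one-dimensional, single-constraint sketch with an assumed array geometry (the paper's contribution is the analytical 2-D blocking matrix, which this generic construction does not reproduce):

```python
import numpy as np
from scipy.linalg import null_space

M = 8                                                 # sensors, uniform linear array
a = lambda u: np.exp(1j * np.pi * np.arange(M) * u)   # steering vector
C = a(0.0)[:, None]                                   # look-direction constraint
f = np.array([1.0 + 0j])                              # desired response

w_q = C @ np.linalg.solve(C.conj().T @ C, f)          # quiescent weight
B = null_space(C.conj().T)                            # blocking matrix, (M, M-1)

# Sample covariance: one interferer off the look direction plus white noise.
rng = np.random.default_rng(3)
N = 2000
x = (a(0.3)[:, None] * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
     + 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))))
R = x @ x.conj().T / N

# Unconstrained Wiener solution in the blocked branch, then combine.
w_a = np.linalg.solve(B.conj().T @ R @ B, B.conj().T @ R @ w_q)
w = w_q - B @ w_a
```

The constraint holds exactly by construction (B is orthogonal to C), while the adaptive branch minimizes output power and so nulls the interferer.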
  • Mixture-Based Extension of the AR Model and its Recursive Bayesian Identification

    Page(s): 3530 - 3542

    An extension of the AutoRegressive (AR) model is studied, which allows transformations and distortions on the regressor to be handled. Many important signal processing problems are amenable to this Extended AR (i.e., EAR) model. It is shown that Bayesian identification and prediction of the EAR model can be performed recursively, in common with the AR model itself. The EAR model does, however, require that the transformation be known. When it is unknown, the associated transformation space is represented by a finite set of candidates. What follows is a Mixture-based EAR model, i.e., the MEAR model. An approximate identification algorithm for MEAR is developed, using a restricted Variational Bayes (VB) method. This restores the elegant recursive update of sufficient statistics. The MEAR model is applied to the robust identification of AR processes corrupted by outliers and burst noise, respectively, and to click removal for speech.

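
Recursive identification of the plain AR model, which the EAR/MEAR construction extends, can be sketched with recursive least squares (a generic sketch; the Bayesian recursion in the paper propagates full posterior statistics rather than point estimates):

```python
import numpy as np

def rls_ar(y, p, lam=1.0, delta=100.0):
    """Recursive least-squares identification of AR(p) coefficients.

    Model: y[n] = theta @ [y[n-1], ..., y[n-p]] + e[n].
    lam is a forgetting factor (1.0 gives ordinary growing-window LS);
    delta scales the initial inverse information matrix.
    """
    theta = np.zeros(p)
    P = delta * np.eye(p)
    for n in range(p, len(y)):
        phi = y[n - p:n][::-1]                    # regressor vector
        k = P @ phi / (lam + phi @ P @ phi)       # gain
        theta = theta + k * (y[n] - phi @ theta)  # prediction-error update
        P = (P - np.outer(k, phi) @ P) / lam      # covariance update
    return theta
```

Each new sample updates the estimate in O(p^2) operations, which is the recursive property that the MEAR extension is designed to preserve.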
  • Estimation of the Number of Sources in Unbalanced Arrays via Information Theoretic Criteria

    Page(s): 3543 - 3553

    Estimating the number of sources impinging on an array of sensors is a well-known and well-investigated problem. A common approach for solving this problem is to use an information theoretic criterion, such as Minimum Description Length (MDL) or the Akaike Information Criterion (AIC). The MDL estimator is known to be a consistent estimator, robust against deviations from the Gaussian assumption, and nonrobust against deviations from the point source and/or temporally or spatially white additive noise assumptions. Over the years, several alternative estimation algorithms have been proposed and tested. Usually, these algorithms are shown, using computer simulations, to have improved performance over the MDL estimator and to be robust against deviations from the assumed spatial model. Nevertheless, these robust algorithms have high computational complexity, requiring several multidimensional searches. In this paper, which is motivated by real-life problems, a systematic approach toward the problem of robust estimation of the number of sources using information theoretic criteria is taken. An MDL-type estimator that is robust against deviation from assumption of equal noise level across the array is studied. The consistency of this estimator, even when deviations from the equal noise level assumption occur, is proven. A novel low-complexity implementation method avoiding the need for multidimensional searches is presented as well, making this estimator a favorable choice for practical applications.

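
The baseline criterion being made robust is the classical MDL estimator, which can be sketched from the eigenvalues of the sample covariance (the standard equal-noise-level form, an assumed illustration rather than the robust variant proposed in the paper):

```python
import numpy as np

def mdl_num_sources(X):
    """Estimate the number of sources from array snapshots X of shape
    (sensors, snapshots) using the classical MDL criterion."""
    p, N = X.shape
    lam = np.linalg.eigvalsh(X @ X.conj().T / N)[::-1]   # descending
    mdl = []
    for k in range(p):
        tail = lam[k:]
        # Ratio of geometric to arithmetic mean of the noise eigenvalues.
        ratio = np.exp(np.mean(np.log(tail))) / np.mean(tail)
        mdl.append(-N * (p - k) * np.log(ratio)
                   + 0.5 * k * (2 * p - k) * np.log(N))
    return int(np.argmin(mdl))
```

The criterion balances how equal the trailing eigenvalues are (the noise subspace) against a description-length penalty on the model order; unequal noise levels across sensors break the first term, which is the failure mode the paper addresses.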
  • On Posterior Distributions for Signals in Gaussian Noise With Unknown Covariance Matrix

    Page(s): 3554 - 3571

    A Bayesian approach to estimate parameters of signals embedded in complex Gaussian noise with unknown color is presented. The study specifically focuses on a Bayesian treatment of the unknown noise covariance matrix making up a nuisance parameter in such problems. By integrating out uncertainties regarding the noise color, an enhanced ability to estimate both the signal parameters and the properties of the error is obtained. Several noninformative priors for the covariance matrix, such as the reference prior, the Jeffreys prior, and modifications to this, are considered. Some of the priors result in analytical solutions, whereas others demand numerical approximations. In the linear signal model, connections are made between the standard Adaptive Maximum Likelihood (AML) estimate and a Bayesian solution using the Jeffreys prior. With adjustments to the Jeffreys prior, correspondence to the regularized solution is also established. This in turn enables a formal treatment of the regularization parameter. Simulations indicate that significant improvements, compared to the AML estimator, can be obtained by considering both the derived regularized solutions as well as the one obtained using the reference prior. The simulations also indicate the possibility of enhancing the predictions of properties of the error as uncertainties in the noise color are acknowledged.

  • Optimum Multiflow Biorthogonal DMT With Unequal Subchannel Assignment

    Page(s): 3572 - 3582

    We consider the design of biorthogonal, as opposed to orthonormal, discrete multitone (DMT) systems supporting multiple services in a single-antenna setting. The services may have differing quality of service (QoS) requirements, quantified by bit rate and symbol error rate specifications. Different users on the system can potentially be assigned different numbers of subchannels. Our goal is to minimize the transmitted power given the QoS specifications for the different users, subject to the knowledge of the channel and colored interference at the receiver input of the DMT system. We find an optimum bit loading scheme that distributes the transmitted bit rate across the various subchannels, an assignment of specific subchannels to each user, and an optimum transceiver. Key conclusions are i) relaxing the orthonormality constraint yields no performance improvement; ii) the optimum transceiver is unaffected by changing service characteristics, and depends only on the channel and interference conditions; iii) the QoS requirements, the number of users, and the number of subchannels assigned to the different users only affect bitloading and subchannel assignment.

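
Optimum bit loading is often computed greedily: each bit goes to the subchannel whose incremental power cost is smallest. A single-user sketch under the assumed SNR-gap approximation (the paper's scheme additionally optimizes the multiuser subchannel assignment):

```python
import numpy as np

def bit_loading(gains, noise, total_bits, gamma=1.0):
    """Greedy bit loading: assign bits one at a time to the subchannel
    with the smallest incremental power cost.

    gains : channel power gains per subchannel; noise : noise powers;
    gamma : SNR gap for the target symbol error rate.
    """
    bits = np.zeros(len(gains), dtype=int)
    # Power to carry b bits on channel i is gamma*noise/gain*(2**b - 1),
    # so the cost of the next bit on a channel holding b bits is
    # gamma*noise/gain * 2**b.
    inc = gamma * noise / gains
    for _ in range(total_bits):
        i = np.argmin(inc)
        bits[i] += 1
        inc[i] *= 2
    return bits
```

Because the incremental costs are increasing in b, the greedy assignment attains the minimum total power for the requested rate.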
  • MIMO OFDM Receivers for Systems With IQ Imbalances

    Page(s): 3583 - 3596

    Orthogonal frequency division multiplexing (OFDM) is a widely recognized modulation scheme for high data rate communications. However, the implementation of OFDM-based systems suffers from in-phase and quadrature-phase (IQ) imbalances in the front-end analog processing. Such imbalances are caused by the analog processing of the received radio frequency (RF) signal, and they cannot be efficiently or entirely eliminated in the analog domain. The resulting IQ distortion limits the achievable operating SNR at the receiver and, consequently, the achievable data rates. The issue of IQ imbalances is even more severe at higher SNR and higher carrier frequencies. In this paper, the effect of IQ imbalances on multiple-input multiple-output (MIMO) OFDM systems is studied, and a framework for combating such distortions through digital signal processing is developed. An input–output relation governing MIMO OFDM systems is derived. The framework is used to design receiver algorithms with compensation for IQ imbalances. It is shown that the complexity of the system at the receiver grows from dimension (n_R × n_T) for ideal IQ branches to (2n_R × 2n_T) in the presence of IQ imbalances. However, by exploiting the structure of space-time block codes along with the distortion models, one can obtain efficient receivers that are robust to IQ imbalances. Simulation results show significant improvement in the achievable BER of the proposed MIMO receivers for space-time block-coded OFDM systems in the presence of IQ imbalances.

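
The dimension growth comes from image-frequency coupling: with IQ imbalance the received baseband signal obeys r = alpha*s + beta*conj(s), so s and its conjugate must be processed jointly. A toy sketch of the model and its zero-forcing compensation (the gain and phase mismatch values are assumed for illustration):

```python
import numpy as np

# IQ imbalance model: gain mismatch g and phase mismatch phi couple the
# signal with its image. The numerical values below are assumed examples.
g, phi = 1.05, np.deg2rad(3.0)
alpha = (1 + g * np.exp(-1j * phi)) / 2
beta = (1 - g * np.exp(1j * phi)) / 2

rng = np.random.default_rng(0)
s = (2 * rng.integers(0, 2, 100) - 1) + 1j * (2 * rng.integers(0, 2, 100) - 1)
r = alpha * s + beta * np.conj(s)                 # distorted QPSK symbols

# Zero-forcing compensation: invert the 2x2 system linking (r, r*) to (s, s*).
s_hat = (np.conj(alpha) * r - beta * np.conj(r)) / (
    alpha * np.conj(alpha) - beta * np.conj(beta))
```

Stacking r with its conjugate doubles each dimension, which is exactly the (n_R × n_T) to (2n_R × 2n_T) growth described in the abstract.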
  • Time-Variant Channel Estimation Using Discrete Prolate Spheroidal Sequences

    Page(s): 3597 - 3607

    We propose and analyze a low-complexity channel estimator for a multiuser multicarrier code division multiple access (MC-CDMA) downlink in a time-variant frequency-selective channel. MC-CDMA is based on orthogonal frequency division multiplexing (OFDM). The time-variant channel is estimated individually for every flat-fading subcarrier, assuming small intercarrier interference. The temporal variation of every subcarrier over the duration of a data block is upper bounded by the Doppler bandwidth determined by the maximum velocity of the users. Slepian showed that time-limited snapshots of bandlimited sequences span a low-dimensional subspace. This subspace is also spanned by discrete prolate spheroidal (DPS) sequences. We expand the time-variant subcarrier coefficients in terms of orthogonal DPS sequences, which we call the Slepian basis expansion. This enables a time-variant channel description that avoids the frequency leakage effect of the Fourier basis expansion. The square bias of the Slepian basis expansion per subcarrier is three orders of magnitude smaller than the square bias of the Fourier basis expansion. We show simulation results for a fully loaded MC-CDMA downlink with classic linear minimum mean square error multiuser detection. The users are moving with 19.4 m/s. Using the Slepian basis expansion channel estimator and a pilot ratio of only 2%, we achieve bit error rate performance close to that with perfect channel knowledge.

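
The Slepian subspace idea can be illustrated directly with SciPy's DPSS routine: a few prolate sequences capture any sufficiently bandlimited subcarrier trajectory over a finite block (a toy example with assumed Doppler parameters, not the paper's MC-CDMA setup):

```python
import numpy as np
from scipy.signal.windows import dpss

# Block of M symbols; Doppler limits channel variation to |nu| <= nu_max.
M, nu_max = 256, 0.02                    # assumed normalized Doppler bound
D = int(np.ceil(2 * nu_max * M)) + 3     # subspace dimension, about 2*nu_max*M
basis = dpss(M, M * nu_max, D).T         # (M, D) discrete prolate sequences

# A synthetic bandlimited subcarrier trajectory (two Doppler components).
t = np.arange(M)
h = np.exp(2j * np.pi * 0.010 * t) + 0.5 * np.exp(-2j * np.pi * 0.015 * t)

# Least-squares fit in the Slepian subspace.
coef = np.linalg.pinv(basis) @ h
h_hat = basis @ coef
rel_err = np.linalg.norm(h - h_hat) / np.linalg.norm(h)
```

A Fourier basis over the same block would leak energy across frequencies at the block edges; the prolate sequences are optimally concentrated in the Doppler band, which is the source of the bias reduction reported above.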
  • IEEE Transactions on Signal Processing Edics

    Page(s): 3608
  • IEEE Transactions on Signal Processing Information for authors

    Page(s): 3609 - 3610

Aims & Scope

IEEE Transactions on Signal Processing covers novel theory, algorithms, performance analyses, and applications of techniques for the processing, understanding, learning, retrieval, mining, and extraction of information from signals.

Meet Our Editors

Editor-in-Chief
Zhi-Quan (Tom) Luo
University of Minnesota