
IEEE Transactions on Signal Processing

Issue 8, Part 1 • August 2005


Displaying Results 1 - 25 of 33
  • Table of contents

    Page(s): c1 - c4
  • IEEE Transactions on Signal Processing publication information

    Page(s): c2
  • Universal decentralized detection in a bandwidth-constrained sensor network

    Page(s): 2617 - 2624

    Consider the problem of decentralized detection with a distributed sensor network where the communication channels between sensors and the fusion center are bandwidth constrained. Previous approaches to this problem typically rely on quantization of either the sensor observations or the local likelihood ratios, with quantization levels optimally designed using knowledge of the noise distribution. In this paper, we assume that each sensor is restricted to send a 1-bit message to the fusion center and that the sensor noises are additive, zero mean, and spatially independent but otherwise unknown and with possibly different distributions across sensors. We construct a universal decentralized detector using a recently proposed isotropic decentralized estimation scheme, which requires only knowledge of either the noise range or its second-order moment. We show that the error probability of this detector decays exponentially at a rate that is lower bounded either in terms of the noise range for bounded noise or the signal-to-noise ratio for noise with unbounded range.

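    As a rough, self-contained illustration of the setting above (not the paper's universal isotropic scheme), the toy simulation below has each sensor send the single bit 1{x_i > tau} and the fusion center average the bits; the noise distributions, threshold, and fusion rule are all assumed for the example.

        import numpy as np
        rng = np.random.default_rng(0)

        def sensor_noise(n):
            # Heterogeneous noise, unknown to the fusion center: half the
            # sensors see uniform noise, half see Laplacian noise (choices
            # made up purely for illustration).
            u = rng.uniform(-1, 1, n // 2)
            l = rng.laplace(0, 0.4, n - n // 2)
            return np.concatenate([u, l])

        def detect(theta, n_sensors=100, tau=0.5):
            # Each sensor sends 1{x_i > tau}; the fusion center averages the
            # bits and decides H1 when the average exceeds 1/2 (reasonable
            # here because the noise is zero mean and symmetric and tau lies
            # midway between the hypothesized signal values 0 and 1).
            x = theta + sensor_noise(n_sensors)
            return (x > tau).mean() > 0.5

        errors = sum(detect(0.0) for _ in range(1000)) + \
                 sum(not detect(1.0) for _ in range(1000))
        print("empirical error probability:", errors / 2000)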
  • Joint time-scale and TDOA estimation: analysis and fast approximation

    Page(s): 2625 - 2634

    Relative motion (rm) between a signal source and a receiver causes a time scaling of the signal arriving at the receiver. When estimating the time-difference-of-arrival (TDOA) of a signal at two receivers, time scaling, when not properly accounted for, can introduce a bias that may dominate the estimation errors. Segmentization processing cannot reduce this bias. Following the derivation of the formulae for the bias and mean-square errors of TDOA estimation under rm, this paper moves on to the joint estimation of TDOA and time scale. It proposes an iterative search for the maximization of the cross-ambiguity function (CAF), which is also the maximum likelihood function for additive Gaussian bandlimited white noise. In addition, a quadratic Lagrange interpolator is proposed to obtain the initial parameter values for the iterative search, which increases the chance of converging to the global optimum. Time scaling a digital sequence by a noninteger factor is necessary in the maximization process. For an N-point sequence, this operation, which first interpolates the samples by sinc functions and then resamples, is of order O(N²). Noting that the magnitude of the sinc function decreases rapidly away from its peak, this paper uses a fast approximation (FA) method that applies only five sinc coefficients for the interpolation, reducing the computation to O(N). Simulation results corroborate that the maximization of the CAF provides estimates that reach the Cramér-Rao lower bound and that the degradation in accuracy is negligible when FA is applied.

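    A minimal sketch of the fast-approximation idea described above: noninteger time scaling by sinc interpolation truncated to a handful of coefficients around each output instant, which brings the cost down from O(N²) to O(N). Parameter names and the example signal are assumed, and no claim is made about the authors' exact windowing.

        import numpy as np

        def resample_truncated_sinc(x, scale, delay, taps=5):
            # Evaluate y[n] ~ x(scale*n - delay) from the samples of x using
            # sinc interpolation truncated to `taps` coefficients around each
            # target instant. Full sinc interpolation of an N-point sequence
            # costs O(N^2); keeping only a few coefficients makes it O(N).
            N = len(x)
            y = np.zeros(N)
            half = taps // 2
            for n in range(N):
                t = scale * n - delay            # fractional sampling instant
                k0 = int(round(t))               # nearest available sample
                for k in range(k0 - half, k0 + half + 1):
                    if 0 <= k < N:
                        y[n] += x[k] * np.sinc(t - k)
            return y

        # Example: rescale a chirp by a small noninteger factor (values assumed).
        n = np.arange(256)
        x = np.cos(0.02 * n ** 1.5)
        y = resample_truncated_sinc(x, scale=1.003, delay=0.25)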
  • Nonlinear adaptive blind whitening for MIMO channels

    Page(s): 2635 - 2647

    A nonlinear adaptive whitening method is proposed for blind deconvolution of MIMO systems by whitening the received signals in both time and space, using a highly nonlinear function of the past output data. The whitened signals are ISI-free and can be viewed as outputs of a memoryless paraunitary mixing system. The convergence of the proposed recursive algorithm is proved. Numerical simulation shows that the proposed whitening method works well, even if the output signal is corrupted by additive noise.

  • ICA in signals with multiplicative noise

    Page(s): 2648 - 2657

    Independent component analysis (ICA) has been shown in recent years to be a very useful tool for blind separation of sources and feature extraction. However, at least in its simpler form, its utility is restricted to the case in which the outputs are linear mixtures of independent sources. This excludes signals with multiplicative noise. In this paper, ICA is extended to this situation. To do so, the special structure that appears in this new model is first studied, and the multiplicative ICA method is then designed to use this structure to find the mixture of the sources in the noisy environment. The local and global convergence properties of the method are studied, and its performance is compared with standard ICA methods.

  • Offline and online identification of hidden semi-Markov models

    Page(s): 2658 - 2663

    We present a new signal model for hidden semi-Markov models (HSMMs). Instead of the constant transition probabilities used in existing models, we use state-duration-dependent transition probabilities. We show that our modeling approach leads to easy and efficient implementation of parameter identification algorithms. We then present a variant of the EM algorithm and an adaptive algorithm for parameter identification of HSMMs in the offline and online cases, respectively.

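    To make the modeling idea concrete, the toy generator below draws a state path from a two-state hidden semi-Markov chain whose probability of leaving a state depends on the duration already spent in it; the logistic hazard is a hypothetical choice, not the paper's parameterization.

        import numpy as np
        rng = np.random.default_rng(1)

        def leave_prob(state, d):
            # Hypothetical duration-dependent hazard: the longer the chain has
            # stayed in the current state, the likelier it is to leave (a
            # logistic choice made up for this illustration; it could also
            # differ per state).
            return 1.0 / (1.0 + np.exp(-(d - 5)))

        def sample_states(T):
            s, d, path = 0, 1, []
            for _ in range(T):
                path.append(s)
                if rng.random() < leave_prob(s, d):   # transition prob. depends on d
                    s, d = 1 - s, 1                   # switch state, reset duration
                else:
                    d += 1
            return np.array(path)

        print(sample_states(30))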
  • Recursive EM and SAGE-inspired algorithms with application to DOA estimation

    Page(s): 2664 - 2677

    This paper is concerned with recursive estimation using augmented data. We study two recursive procedures closely linked with the well-known expectation-maximization (EM) and space-alternating generalized EM (SAGE) algorithms. Unlike iterative methods, the recursive EM and SAGE-inspired algorithms give a quick update on the estimates when new data arrive. Under mild conditions, estimates generated by these procedures are strongly consistent and asymptotically normally distributed. These mathematical properties hold for a broad class of problems. When applied to direction-of-arrival (DOA) estimation, the recursive EM and SAGE-inspired algorithms lead to a very simple and fast implementation of the maximum-likelihood (ML) method. The most complicated computation in each recursion is the inversion of the augmented information matrix. Through data augmentation, this matrix is diagonal and easy to invert. More importantly, there is no search in these recursive procedures. Consequently, the computational time is much less than that of existing numerical methods for finding ML estimates. This feature greatly increases the potential of the ML approach in real-time processing. Numerical experiments show that both algorithms provide good results at low computational cost.

  • Robust iterative fitting of multilinear models

    Page(s): 2678 - 2689

    Parallel factor (PARAFAC) analysis is an extension of low-rank matrix decomposition to higher-way arrays, also referred to as tensors. It decomposes a given array into a sum of multilinear terms, analogous to the familiar bilinear vector outer products that appear in matrix decomposition. PARAFAC analysis generalizes and unifies common array processing models, like joint diagonalization and ESPRIT; it has found numerous applications from blind multiuser detection and multidimensional harmonic retrieval to clustering and nuclear magnetic resonance. The prevailing fitting algorithm in all these applications is based on (alternating) least squares, which is optimal for Gaussian noise. In many cases, however, measurement errors are far from Gaussian. In this paper, we develop two iterative algorithms for the least absolute error fitting of general multilinear models. The first is based on efficient interior point methods for linear programming, employed in an alternating fashion. The second is based on a weighted median filtering iteration, which is particularly appealing from a simplicity viewpoint. Both are guaranteed to converge in terms of absolute error. Performance is illustrated by means of simulations and compared to the pertinent Cramér-Rao bounds (CRBs).

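    In least-absolute-error fitting, updating one coefficient with everything else fixed reduces to a scalar problem of the form min over beta of sum_i w_i*|x_i - beta|, whose solution is a weighted median. The sketch below gives that generic building block only; the authors' full alternating iteration is not reproduced here.

        import numpy as np

        def weighted_median(x, w):
            # Minimizer of sum_i w_i * |x_i - beta| over beta (w_i > 0): the
            # smallest sample at which the cumulative sorted weight reaches
            # half of the total weight.
            x, w = np.asarray(x, float), np.asarray(w, float)
            order = np.argsort(x)
            x, w = x[order], w[order]
            cw = np.cumsum(w)
            return x[np.searchsorted(cw, 0.5 * cw[-1])]

        print(weighted_median([1.0, 2.0, 7.0], [1.0, 1.0, 3.0]))   # -> 7.0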
  • Entropy-based uncertainty measures for L²(Rⁿ), ℓ²(Z), and ℓ²(Z/NZ) with a Hirschman optimal transform for ℓ²(Z/NZ)

    Page(s): 2690 - 2699

    The traditional Heisenberg-Weyl measure quantifies the joint localization, uncertainty, or concentration of a signal in the phase plane based on a product of energies expressed as signal variances in time and in frequency. In the image processing literature, the term compactness has also been used to refer to this same notion of joint localization, in the sense of a signal representation that is efficient simultaneously in time (or space) and frequency. In this paper, we consider Hirschman uncertainty principles based not on energies and variances directly but rather on entropies computed with respect to normalized energy densities in time and frequency. Unlike the Heisenberg-Weyl measure, this entropic Hirschman notion of joint uncertainty extends naturally from the case of infinitely supported continuous-time signals to the cases of both finitely and infinitely supported discrete-time signals. For the first time, we consider these three cases together and study them relative to one another. In the case of infinitely supported continuous-time signals, we find that, consistent with the energy-based Heisenberg principle, the optimal time-frequency concentration with respect to the Hirschman uncertainty principle is realized by translated and modulated Gaussian functions. In the two discrete cases, however, the entropy-based measure yields optimizers that may be generated by applying compositions of operators to the Kronecker delta. Study of the discrete cases yields two interesting results. First, in the finitely supported case, the Hirschman-optimal functions coincide with the so-called "picket fence" functions that are also optimal with respect to the joint time-frequency counting measure of Donoho and Stark. Second, the Hirschman-optimal functions in the infinitely supported case can be reconciled with continuous-time Gaussians through a certain limiting process. While a different limiting process can be used to reconcile the finitely and infinitely supported discrete cases, there does not appear to be a straightforward limiting process that unifies all three cases: the optimizers from the finitely supported discrete case are decidedly non-Gaussian. We perform a very simple experiment indicating that the Hirschman optimal transform (HOT) is superior to the discrete Fourier transform (DFT) and discrete cosine transform (DCT) in terms of its ability to separate or resolve two limiting cases of localization in frequency, viz. pure tones and additive white noise. We believe that these differences arise from the use of entropy rather than energy as an optimality criterion and are intimately related to the apparent incongruence between the infinitely supported continuous-time case and the finitely supported discrete-time case.

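    A small sketch of the finitely supported discrete measure discussed above: the average of the Shannon entropies of the normalized energy densities in time and in frequency (one common normalization, assumed here). For N = 16, both the Kronecker delta and the spacing-4 "picket fence" attain the minimum value 0.5*ln(N).

        import numpy as np

        def entropy(p):
            p = p[p > 0]
            return -np.sum(p * np.log(p))

        def hirschman_uncertainty(x):
            # Average of the Shannon entropies of the normalized energy
            # densities in time and in frequency (unitary DFT).
            x = np.asarray(x, dtype=complex)
            X = np.fft.fft(x) / np.sqrt(len(x))
            pt = np.abs(x) ** 2 / np.sum(np.abs(x) ** 2)
            pf = np.abs(X) ** 2 / np.sum(np.abs(X) ** 2)
            return 0.5 * (entropy(pt) + entropy(pf))

        N = 16
        delta = np.zeros(N); delta[0] = 1.0            # Kronecker delta
        picket = np.zeros(N); picket[::4] = 1.0        # "picket fence", spacing sqrt(N)
        # Both print ln 4 ~ 1.386, the minimum 0.5*ln(N) for this N.
        print(hirschman_uncertainty(delta), hirschman_uncertainty(picket))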
  • Linear transmit processing in MIMO communications systems

    Page(s): 2700 - 2712

    We examine and compare the different types of linear transmit processing for multiple-input multiple-output (MIMO) systems, where we assume that the receive filter is independent of the transmit filter, in contrast to the joint optimization of transmit and receive filters. We can identify three filter types analogous to receive processing: the transmit matched filter, the transmit zero-forcing filter, and the transmit Wiener filter. We show that the transmit filters are based on optimizations similar to those of the respective receive filters, with an additional constraint on the transmit power. Moreover, the transmit Wiener filter has convergence properties similar to those of the receive Wiener filter, i.e., it converges to the matched filter and the zero-forcing filter for low and high signal-to-noise ratio, respectively. We give closed-form solutions for all transmit filters and present the fundamental result that their mean-square errors are equal to the errors of the respective receive filters if the information symbols and the additive noise are uncorrelated. However, our simulations reveal that the bit-error ratio results of the transmit filters differ from the results for the respective receive filters.

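    As a concrete reference point for the transmit filters compared above, here is a minimal transmit zero-forcing precoder with a scalar power normalization, in generic textbook form; the variable names and the unit-power-symbol assumption are ours, and the paper's transmit matched and Wiener filters are not reproduced.

        import numpy as np

        def transmit_zero_forcing(H, s, Ptx=1.0):
            # Pre-invert the channel at the transmitter, T = H^H (H H^H)^{-1},
            # and scale by beta so that the average transmit power meets Ptx
            # for unit-power uncorrelated symbols; the receiver only has to
            # undo the scalar beta.
            T = H.conj().T @ np.linalg.inv(H @ H.conj().T)
            beta = np.sqrt(Ptx / np.trace(T @ T.conj().T).real)
            x = beta * (T @ s)                   # transmitted vector
            y = H @ x                            # noiseless receive side
            return x, y / beta                   # y/beta equals s exactly

        rng = np.random.default_rng(0)
        H = (rng.standard_normal((2, 4)) + 1j * rng.standard_normal((2, 4))) / np.sqrt(2)
        s = np.array([1 + 1j, -1 + 1j]) / np.sqrt(2)          # unit-power symbols
        x, s_hat = transmit_zero_forcing(H, s)
        print(np.allclose(s_hat, s))                          # True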
  • Rank-deficient robust Capon filter bank approach to complex spectral estimation

    Page(s): 2713 - 2726

    We consider nonparametric complex spectral estimation using an adaptive filtering-based approach in which the finite-impulse response (FIR) filter bank is obtained via a rank-deficient robust Capon beamformer. We show that, by allowing the sample covariance matrix to be rank deficient, we can achieve much higher resolution than existing approaches, which is useful in many applications, including radar target detection and feature extraction. Numerical examples are provided to demonstrate the performance of the new approach compared with existing data-adaptive and data-independent FIR filtering-based spectral estimation methods.

  • Performance analysis of the deficient length LMS adaptive algorithm

    Page(s): 2727 - 2734

    In almost all analyses of the least mean-square (LMS) finite impulse response (FIR) adaptive algorithm, it is assumed that the length of the adaptive filter is equal to that of the unknown system impulse response. However, in many practical situations a deficient-length adaptive filter, whose length is less than that of the unknown system, is employed, and analysis results for the sufficient-length LMS algorithm are not necessarily applicable to the deficient-length case. There is therefore an essential need to accurately quantify the behavior of the LMS algorithm in realistic situations where the length of the adaptive filter is deficient. In this paper, we present a performance analysis of the deficient-length LMS adaptive algorithm for correlated Gaussian input data under the common independence assumption. Exact expressions that completely characterize the transient and steady-state mean-square performance of the algorithm are developed, leading to new insights into the statistical behavior of the deficient-length LMS algorithm. Simulation experiments illustrate the accuracy of the theoretical results in predicting the convergence behavior of the algorithm.

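    A quick toy simulation of the deficient-length setting analyzed above (an illustration, not the paper's analysis): the adaptive filter is half the length of the unknown system, so its steady-state MSE is dominated by the unmodeled tail rather than by the measurement noise. All lengths, step size, and noise level are assumed.

        import numpy as np
        rng = np.random.default_rng(0)

        # Unknown system has 16 taps; the adaptive filter is deliberately
        # deficient with only 8. With white input, LMS converges toward the
        # first 8 taps, and the unmodeled tail h[8:] acts as extra noise.
        h = rng.standard_normal(16)
        M, mu, T = 8, 0.005, 20000
        w = np.zeros(M)
        x = rng.standard_normal(T + 16)
        err = np.zeros(T)
        for n in range(T):
            u = x[n:n + 16][::-1]                 # regressor (most recent first)
            d = h @ u + 0.01 * rng.standard_normal()
            e = d - w @ u[:M]                     # deficient-length filter error
            w += mu * e * u[:M]
            err[n] = e ** 2
        # Steady-state MSE is roughly the unmodeled-tail power plus noise power.
        print(err[-2000:].mean(), np.sum(h[M:] ** 2) + 0.01 ** 2)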
  • A robust H2 filtering approach and its application to equalizer design for communication systems

    Page(s): 2735 - 2747

    In this paper, we develop a robust H2 filtering approach to the design of robust equalizers for communication systems. The paper has two main parts. First, we present a robust H2 filter design method for general multi-input multi-output linear systems with norm-bounded uncertainties in the system matrices, based on the linear matrix inequality technique. The characteristics of the equalization problem are taken into account in the filtering model considered here. The advantage of the proposed method is that we can find the optimal solution to the robust H2 filtering problem at a reasonable computational burden. Second, we apply the above method to the design of robust equalizers. Two generic examples are studied, one for a single transmit and receive antenna system and another for a multiple transmit and receive antenna system. Both analytical and simulation results show that the robust H2 equalizer outperforms the zero-forcing equalizer in both bit-error rate and mean-square error over the whole simulated range of signal-to-noise ratio (SNR) when the channel is perturbed from its nominal value, while for the nominal channel, the robust H2 equalizer performs better than the zero-forcing equalizer at low SNR but worse at high SNR.

  • Capon algorithm mean-squared error threshold SNR prediction and probability of resolution

    Page(s): 2748 - 2764

    Below a specific threshold signal-to-noise ratio (SNR), the mean-squared error (MSE) performance of signal parameter estimates derived from the Capon algorithm degrades swiftly. Prediction of this threshold SNR point is of practical significance for robust system design and analysis. The exact pairwise error probabilities for the Capon (and Bartlett) algorithm, derived herein, are given by simple finite sums involving no numerical integration, include finite-sample effects, and hold for an arbitrary colored data covariance. Via an adaptation of an interval-error-based method, these error probabilities, along with the local-error MSE predictions of Vaidyanathan and Buckley, facilitate accurate prediction of the Capon threshold-region MSE performance for an arbitrary number of well-separated sources, circumventing the need for numerous Monte Carlo simulations. A large-sample closed-form approximation for the Capon threshold SNR is provided for uniform linear arrays. A new, exact, two-point measure of the probability of resolution for the Capon algorithm, which includes the deleterious effects of signal model mismatch, is a serendipitous byproduct of this analysis; it predicts the SNRs required for closely spaced sources to be mutually resolvable by the Capon algorithm. Lastly, a general strategy is provided for obtaining accurate MSE predictions that account for signal model mismatch.

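    For reference, the estimator whose threshold behavior is analyzed above is the Capon (MVDR) spatial spectrum; a minimal textbook version for a half-wavelength uniform linear array is sketched below, with no attempt to reproduce the paper's finite-sample error analysis.

        import numpy as np

        def capon_spectrum(X, n_grid=361):
            # Capon (MVDR) spatial spectrum 1 / (a^H R^{-1} a) for a uniform
            # linear array with half-wavelength spacing; X is the data matrix
            # (num_sensors x num_snapshots), and inverting the sample
            # covariance requires at least num_sensors snapshots.
            m, n = X.shape
            R = X @ X.conj().T / n
            Rinv = np.linalg.inv(R)
            theta = np.linspace(-np.pi / 2, np.pi / 2, n_grid)
            k = np.arange(m)
            spectrum = np.empty(n_grid)
            for i, th in enumerate(theta):
                a = np.exp(1j * np.pi * k * np.sin(th))   # steering vector
                spectrum[i] = 1.0 / np.real(a.conj() @ Rinv @ a)
            return theta, spectrum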
  • Extension of the matrix Bartlett's formula to the third and fourth order and to noisy linear models with application to parameter estimation

    Page(s): 2765 - 2776

    This paper focuses on the extension of the asymptotic covariance of the sample covariance (known as Bartlett's formula) of linear processes to third- and fourth-order sample cumulants and to noisy linear processes. Closed-form expressions for the asymptotic covariance and cross-covariance of the sample second-, third-, and fourth-order cumulants are derived in a relatively straightforward manner, thanks to a matrix polyspectral representation and a symbolic calculus akin to a high-level language. As an application of these extended formulae, we underscore the sensitivity of the asymptotic performance of ARMA parameter estimates obtained by an arbitrary third- or fourth-order-based algorithm with respect to the signal-to-noise ratio, the spectra of the linear process, and the colored additive noise.

  • Blind identification of Volterra-Hammerstein systems

    Page(s): 2777 - 2787

    This paper is concerned with the blind identification of Volterra-Hammerstein systems. Two identification scenarios are covered. The first scenario assumes that, although the input is not available, the statistics of the input are a priori known. This case appears in communication applications where the input statistics of the transmitter are known to the receiver. The second scenario assumes that the input statistics are unknown. In the case of known input statistics, the input is stationary higher-order white noise with an arbitrary probability density function. Under the scenario of unknown input statistics, the input is restricted to a Gaussian white process. New cumulant-based identification methods are described for the above scenarios. The problem is converted into a linear multivariable form, and the output cumulants are calculated using Kronecker products. First, initial conditions are determined from a linear system of equations; these correspond to the boundary values of the Volterra kernels. The remaining kernel coefficients can then be determined, under both identification schemes, from a possibly overdetermined system of linear equations.

  • Sampling and reconstruction of signals with finite rate of innovation in the presence of noise

    Page(s): 2788 - 2805

    Recently, it was shown that it is possible to develop exact sampling schemes for a large class of parametric nonbandlimited signals, namely certain signals of finite rate of innovation. A common feature of such signals is that they have a finite number of degrees of freedom per unit of time and can be reconstructed from a finite number of uniform samples. In order to prove sampling theorems, Vetterli et al. considered the case of deterministic, noiseless signals and developed algebraic methods that lead to perfect reconstruction. However, when noise is present, many of those schemes can become ill-conditioned. In this paper, we revisit the problem of sampling and reconstruction of signals with finite rate of innovation and propose improved, more robust methods that have better numerical conditioning in the presence of noise and yield more accurate reconstruction. We analyze, in detail, a signal made up of a stream of Diracs and develop algorithmic tools that are used as a basis in all constructions. While some of the techniques have already been encountered in the spectral estimation framework, we further explore preconditioning methods that lead to improved resolution performance when the signal contains closely spaced components. For classes of periodic signals, such as piecewise polynomials and nonuniform splines, we propose novel algebraic approaches that solve the sampling problem in the Laplace domain, after appropriate windowing. Building on the results for periodic signals, we extend our analysis to finite-length signals and develop schemes based on a Gaussian kernel, which avoid the problem of ill-conditioning by proper weighting of the data matrix. Our methods use structured linear systems and robust algorithmic solutions, which we demonstrate through simulation results.

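    The algebraic core that the noiseless schemes of Vetterli et al. build on, and that this paper makes robust to noise, is the annihilating-filter step; a noiseless textbook sketch for a periodic stream of K Diracs follows. The example locations and weights are assumed, and the paper's preconditioning and weighting are not shown.

        import numpy as np

        K = 3
        t_true = np.array([0.12, 0.47, 0.80])            # Dirac locations in [0, 1)
        a_true = np.array([1.0, -0.6, 0.3])              # Dirac weights
        m = np.arange(-K, K + 1)                         # 2K+1 Fourier coefficients suffice
        X = np.exp(-2j * np.pi * np.outer(m, t_true)) @ a_true

        # The annihilating filter h (with h[0] = 1) satisfies
        # sum_l h[l] X[m-l] = 0; stack K such equations and solve.
        A = np.array([[X[i + K - l] for l in range(1, K + 1)] for i in range(K)])
        b = -X[K:2 * K]
        h = np.concatenate(([1.0], np.linalg.solve(A, b)))

        u = np.roots(h)                                  # roots encode the locations
        t_hat = np.sort(np.mod(-np.angle(u) / (2 * np.pi), 1))
        print(t_hat)        # ~ [0.12, 0.47, 0.80] in this noiseless case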
  • On the sphere-decoding algorithm I. Expected complexity

    Page(s): 2806 - 2818

    The problem of finding the least-squares solution to a system of linear equations whose unknown vector is composed of integers, but whose coefficient matrix and given vector are composed of real numbers, arises in many applications: communications, cryptography, and GPS, to name a few. The problem is equivalent to finding the closest lattice point to a given point and is known to be NP-hard. In communications applications, however, the given vector is not arbitrary but rather is an unknown lattice point that has been perturbed by an additive noise vector whose statistical properties are known. Therefore, in this paper, rather than dwell on the worst-case complexity of the integer least-squares problem, we study its expected complexity, averaged over the noise and over the lattice. For the "sphere decoding" algorithm of Fincke and Pohst, we find a closed-form expression for the expected complexity, both for the infinite and the finite lattice. It is demonstrated in the second part of this paper that, for a wide range of signal-to-noise ratios (SNRs) and numbers of antennas, the expected complexity is polynomial, in fact often roughly cubic. Since many communications systems operate at noise levels for which the expected complexity turns out to be polynomial, this suggests that maximum-likelihood decoding, which was hitherto thought to be computationally intractable, can in fact be implemented in real time, a result with many practical implications.

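    For concreteness, a bare-bones Fincke-Pohst style sphere decoder for the integer least-squares problem described above is sketched below: a depth-first search seeded with the Babai (nulling-and-rounding) point. Practical decoders add column ordering and Schnorr-Euchner enumeration, and the test setup at the end is assumed.

        import numpy as np

        def sphere_decode(H, y):
            # Integer least squares: minimize ||y - H z||^2 over integer z by a
            # Fincke-Pohst style depth-first search with pruning.
            m, n = H.shape
            Q, R = np.linalg.qr(H)                    # H = Q R, R upper triangular
            yp = Q.T @ y
            # Babai point gives a finite initial search radius.
            z0 = np.zeros(n)
            for i in range(n - 1, -1, -1):
                z0[i] = round((yp[i] - R[i, i + 1:] @ z0[i + 1:]) / R[i, i])
            best = {'z': z0.copy(), 'd2': float(np.sum((yp - R @ z0) ** 2))}

            z = np.zeros(n)
            def search(i, d2):
                if i < 0:                             # full candidate inside the sphere
                    if d2 < best['d2']:
                        best['z'], best['d2'] = z.copy(), d2
                    return
                c = (yp[i] - R[i, i + 1:] @ z[i + 1:]) / R[i, i]
                t = np.sqrt(max(best['d2'] - d2, 0.0)) / abs(R[i, i])
                for zi in range(int(np.ceil(c - t)), int(np.floor(c + t)) + 1):
                    inc = (R[i, i] * (c - zi)) ** 2   # cost added at this level
                    if d2 + inc < best['d2']:         # prune outside current radius
                        z[i] = zi
                        search(i - 1, d2 + inc)

            search(n - 1, 0.0)
            return best['z'].astype(int)

        # Noisy lattice point, as in the communications setting of the paper.
        rng = np.random.default_rng(0)
        H = rng.standard_normal((6, 6))
        z_true = rng.integers(-3, 4, 6)
        y = H @ z_true + 0.05 * rng.standard_normal(6)
        print(sphere_decode(H, y), z_true)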
  • On the sphere-decoding algorithm II. Generalizations, second-order statistics, and applications to communications

    Page(s): 2819 - 2834

    In Part I, we found a closed-form expression for the expected complexity of the sphere-decoding algorithm, both for the infinite and the finite lattice. We continue the discussion in this paper by generalizing the results to the complex version of the problem and using the expected complexity expressions to determine situations where sphere decoding is practically feasible. In particular, we consider applications of sphere decoding to detection in multiantenna systems. We show that, for a wide range of signal-to-noise ratios (SNRs), rates, and numbers of antennas, the expected complexity is polynomial, in fact often roughly cubic. Since many communications systems operate at noise levels for which the expected complexity turns out to be polynomial, this suggests that maximum-likelihood decoding, which was hitherto thought to be computationally intractable, can in fact be implemented in real time, a result with many practical implications. To provide complexity information beyond the mean, we derive a closed-form expression for the variance of the complexity of the sphere-decoding algorithm in a finite lattice. Furthermore, we consider the expected complexity of sphere decoding for channels with memory, where the lattice-generating matrix has a special Toeplitz structure. Results indicate that the expected complexity in this case is again polynomial over a wide range of SNRs, rates, data block lengths, and channel impulse response lengths.

  • Bidirectional conversion between DCT coefficients of blocks and their subblocks

    Page(s): 2835 - 2841

    In the context of the discrete Fourier transform (DFT) and the discrete cosine transform (DCT), we present efficient methods for bidirectional conversion between the transform coefficients of a signal block (one- or two-dimensional) and the transform coefficients of the signal's subblocks. The DFT case is used to exemplify the underlying theoretical issues, while the DCT case is considered for its practical importance. For typical DCT block sizes, our algorithms result in a 20% savings in multiplications over the fastest existing methods and incur no penalty in reordering operations.

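    The conversion being accelerated can be stated in a few lines by going through the signal domain, the obvious but slower route; the paper's contribution is performing it directly on the coefficients with roughly 20% fewer multiplications than the fastest prior methods. SciPy is assumed for the DCT routines.

        import numpy as np
        from scipy.fft import dct, idct

        def merge_subblock_dcts(C1, C2):
            # Combine the DCT-II coefficients of two adjacent length-N
            # subblocks into the DCT-II coefficients of the length-2N block,
            # via inverse transform, concatenation, and forward transform.
            x = np.concatenate([idct(C1, norm='ortho'), idct(C2, norm='ortho')])
            return dct(x, norm='ortho')

        x = np.arange(16.0)
        C = merge_subblock_dcts(dct(x[:8], norm='ortho'), dct(x[8:], norm='ortho'))
        print(np.allclose(C, dct(x, norm='ortho')))   # True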
  • Iterative detection for pretransformed OFDM by subcarrier reconstruction

    Page(s): 2842 - 2854

    In this paper, an iterative detection method is proposed for an uncoded pretransformed (PT) orthogonal frequency division multiplexing (OFDM) system in which the channel is not known at the transmitter. The iterative detection starts with linear detection. The noiseless received signal at the weakest subcarrier (the one with the smallest channel amplitude) is estimated from all the detected data symbols using a hard or soft decision. Then, the actual received signal at the weakest subcarrier is replaced by the estimated one. This process is referred to as reconstruction. After reconstruction, linear detection is carried out again to generate the next set of symbol estimates. The whole process proceeds iteratively to reconstruct the received signal at the next-weakest subcarrier. The transform coefficients and the iterative process are designed to maximize the minimum signal-to-noise power ratio. Under the assumption that the previous detection is error free, it is shown analytically that the iterative method achieves a diversity advantage of i+1 in the ith iteration, which explains its superior performance. Owing to the flexibility of the transform design, the analysis is applicable to other common systems as well. Simulations in realistic channels are carried out, and the bit-error rate performance of the iterative detection is superior to that of conventional detectors for PT-OFDM and OFDM systems.

  • A sequential Monte Carlo method for adaptive blind timing estimation and data detection

    Page(s): 2855 - 2865

    Accurate estimation of synchronization parameters is critical for reliable data detection in digital transmission. Although several techniques have been proposed in the literature for estimation of the reference parameters, i.e., timing, carrier phase, and carrier frequency offsets, they are based on either heuristic arguments or approximations, since optimal estimation is analytically intractable in most practical setups. In this paper, we introduce a new alternative approach for blind synchronization and data detection derived within the Bayesian framework and implemented via the sequential Monte Carlo (SMC) methodology. By considering an extended dynamic system in which the reference parameters and the transmitted symbols are system-state variables, the proposed SMC technique guarantees an asymptotically minimal symbol error rate when it is combined with adequate receiver architectures, in both open-loop and closed-loop configurations. The performance of the proposed technique is studied analytically, by deriving the posterior Cramér-Rao bound for timing estimation, and through computer simulations that illustrate the overall performance of the resulting receivers.

  • Minimum BER linear transceivers for MIMO channels via primal decomposition

    Page(s): 2866 - 2882

    This paper considers the employment of linear transceivers for communication through multiple-input multiple-output (MIMO) channels with channel state information (CSI) at both sides of the link. The design of linear MIMO transceivers has been studied since the 1970s by optimizing simple measures of the quality of the system, such as the trace of the mean-square error matrix, subject to a power constraint. Recent results showed how to solve the problem optimally for the family of Schur-concave and Schur-convex cost functions. In particular, when the constellations used on the different transmit dimensions are equal, the bit-error rate (BER) averaged over these dimensions happens to be a Schur-convex function, and therefore the problem can be optimally solved. In the more general case, however, when different constellations are used, the average BER is not a Schur-convex function, and the optimal design in terms of minimum BER is an open problem. This paper solves the minimum BER problem with arbitrary constellations by first reformulating the problem in convex form and then proposing two solutions. One is a heuristic and suboptimal solution, which performs remarkably well in practice. The other is the optimal solution, obtained by decomposing the convex problem into several subproblems controlled by a master problem (a technique borrowed from optimization theory), for which extremely simple algorithms exist. Thus, the minimum BER problem can be optimally solved in practice with very simple algorithms.

  • Linear turbo equalization analysis via BER transfer and EXIT charts

    Page(s): 2883 - 2897

    In this paper, two analytical methods are presented to investigate the soft-information evolution characteristics of a soft-input soft-output (SISO) linear equalizer and their application to the design of turbo equalization systems without relying on extensive simulation. Given the channel and a SISO equalization algorithm, the first method analytically computes the bit-error rate (BER) of the SISO equalizer in two extreme cases (no and perfect a priori information), from which a BER transfer characteristic is estimated. The second computes the mutual information [a key parameter of the extrinsic information transfer (EXIT) chart] at the two end points of the EXIT function. Then, by modeling the SISO equalizer BER transfer and EXIT functions as linear, some of the behavior of linear turbo equalization, such as how the BER performance improves as iterations proceed, can be predicted. Further, the soft-information evolution characteristics of different linear SISO equalizers can be compared, and the usefulness of iterative methods such as linear turbo equalization for a given channel can be determined. Compared with existing methods for generating EXIT functions, these predictive methods provide insight into the iterative behavior of linear turbo equalizers with a substantial reduction in numerical complexity.


Aims & Scope

IEEE Transactions on Signal Processing covers novel theory, algorithms, performance analyses, and applications of techniques for the processing, understanding, learning, retrieval, mining, and extraction of information from signals.


Meet Our Editors

Editor-in-Chief
Zhi-Quan (Tom) Luo
University of Minnesota