
IEEE Transactions on Signal Processing

Issue 6 • June 2009


Displaying Results 1 - 25 of 39
  • Table of contents

    Publication Year: 2009 , Page(s): C1 - C4
    Freely Available from IEEE
  • IEEE Transactions on Signal Processing publication information

    Publication Year: 2009 , Page(s): C2
    Freely Available from IEEE
  • Principal Curve Time Warping

    Publication Year: 2009 , Page(s): 2041 - 2049
    Cited by:  Papers (1)

    Time warping is used in many fields of time series analysis and has been implemented effectively across a wide range of application areas. Rather than focusing on a particular application, we address the general problem and employ principal curves, a powerful machine learning tool, to improve the noise robustness of existing time warping methods. Increasing noise is the principal cause of unnatural alignments, so we tested our approach on low signal-to-noise ratio (SNR) signals and obtained satisfactory results. Moreover, for signals denoised by principal curve projections we propose a differential-equation-based time warping method whose performance is comparable to that of existing techniques at lower computational complexity.

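    The principal-curve machinery above is specific to the paper, but the baseline it robustifies is classical dynamic time warping (DTW). A minimal DTW sketch in Python follows, as illustrative background only; the names and the quadratic cost are my choices, not the paper's.

        import numpy as np

        def dtw(x, y):
            """Classical dynamic time warping between 1-D sequences x and y.
            Returns the cumulative alignment cost and the warping path."""
            n, m = len(x), len(y)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = (x[i - 1] - y[j - 1]) ** 2
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            path, i, j = [], n, m              # backtrack to recover the alignment
            while i > 0 and j > 0:
                path.append((i - 1, j - 1))
                step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
                if step == 0:
                    i, j = i - 1, j - 1
                elif step == 1:
                    i -= 1
                else:
                    j -= 1
            return D[n, m], path[::-1]

        t = np.linspace(0.0, 1.0, 100)
        cost, path = dtw(np.sin(2 * np.pi * t), np.sin(2 * np.pi * t ** 2))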
  • Energy-Efficient Routing for Signal Detection in Wireless Sensor Networks

    Publication Year: 2009 , Page(s): 2050 - 2063
    Cited by:  Papers (15)

    For many envisioned applications of wireless sensor networks (WSNs), information processing involves dealing with distributed data in the context of accurate signal detection and energy-efficient routing, each of which has been an active research topic for many years. In this paper, we relate these two aspects via joint optimization. Considering the scenario of using distributed radar-like sensors to detect the presence of an object through active sensing, we formulate the problem of energy-efficient routing for signal detection under the Neyman-Pearson criterion, apparently for the first time. The joint optimization of detection and routing is carried out in a fusion center, which precomputes the routes as a function of the geographic location to be monitored. Accordingly, we propose three different routing metrics that aim at an appropriate tradeoff between detection performance and energy expenditure. In particular, each metric relates the detection performance, explicitly in terms of the probabilities of detection and false alarm, to the energy consumed in sensing and routing. The routing problems are formulated as combinatorial optimization programs, and we provide solutions drawing on operations research. We present extensive simulation results that demonstrate the energy and detection performance tradeoffs for each proposed routing metric.

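    As background for the detection side of these metrics, the following is a minimal Neyman-Pearson calculation for a known signal in white Gaussian noise: fix the false-alarm probability, derive the threshold, and read off the detection probability. This is textbook material, not the paper's sensing model.

        import numpy as np
        from scipy.stats import norm

        def np_threshold_and_pd(pfa, snr_db, n_samples):
            """Matched-filter statistic: N(0,1) under H0, N(sqrt(n*SNR),1) under H1."""
            snr = 10.0 ** (snr_db / 10.0)
            gamma = norm.isf(pfa)                          # threshold fixing Pfa
            pd = norm.sf(gamma - np.sqrt(n_samples * snr))
            return gamma, pd

        gamma, pd = np_threshold_and_pd(pfa=1e-3, snr_db=-5.0, n_samples=50)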
  • One- and Two-Stage Tunable Receivers

    Publication Year: 2009 , Page(s): 2064 - 2073
    Cited by:  Papers (1)

    In this paper, we propose and assess a CFAR detector that can adjust its "directivity" through a real scalar parameter. It relies on the usual assumption that a set of homogeneous training data is available, and it encompasses as special cases the well-known Kelly's GLRT and the recently introduced W-ABORT detector. More importantly, it can be tuned to control the level to which sidelobe signals are rejected; such functionality is particularly important for containing the number of false alarms in the presence of mismatched signals. We also consider a parametric detector that resorts to a diagonally loaded sample covariance matrix, commonly adopted to take advantage of the presence of strong interferers. The performance assessment of this detector has shown that it can significantly outperform Kelly's GLRT in terms of detection probabilities for matched signals and in terms of selectivity, but it is not strictly CFAR. We also propose using the CFAR parametric detector as the second stage of a two-stage tunable detector, and show that such a two-stage detector can outperform previously proposed tunable receivers in terms of selectivity. The analysis of the detectors is conducted assuming a homogeneous Gaussian environment; with reference to this scenario and to the CFAR detectors, we derive analytical expressions for the probability of false alarm and the probability of detection for both matched and mismatched signals.

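    Of the special cases mentioned, Kelly's GLRT is compact enough to sketch. Below is its common textbook form (unnormalized sample covariance from K training snapshots); the variable names are mine, and the paper's tunable generalization is not shown.

        import numpy as np

        def kelly_glrt(x, s, Z):
            """Kelly's GLRT for steering vector s in test cell x (length N),
            given K homogeneous training snapshots in the N x K matrix Z."""
            S = Z @ Z.conj().T                       # unnormalized sample covariance
            Si = np.linalg.inv(S)
            num = np.abs(s.conj() @ Si @ x) ** 2
            den = (s.conj() @ Si @ s).real * (1.0 + (x.conj() @ Si @ x).real)
            return float(num / den)                  # compare against a CFAR threshold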
  • Notes on the Tightness of the Hybrid Cramér–Rao Lower Bound

    Publication Year: 2009 , Page(s): 2074 - 2084
    Cited by:  Papers (17)

    In this paper, we study the properties of the hybrid Cramér-Rao bound (HCRB). We first address the problem of estimating unknown deterministic parameters in the presence of nuisance random parameters, and we specify a necessary and sufficient condition under which the HCRB of the nonrandom parameters is equal to the Cramér-Rao bound (CRB). In this case, the HCRB is asymptotically tight [in high signal-to-noise ratio (SNR) or in large-sample scenarios], and therefore useful. This condition can be evaluated even when the CRB cannot be evaluated analytically. If this condition is not satisfied, we show that the HCRB on the nonrandom parameters is always looser than the CRB. We then address the problem in which the random parameters are not nuisance, so that both random and nonrandom parameters need to be estimated, and we provide a necessary and sufficient condition for the HCRB to be tight. Furthermore, we show that if the HCRB is tight, it is attained by the maximum likelihood/maximum a posteriori probability (ML/MAP) estimator, an unbiased estimator that estimates both random and nonrandom parameters simultaneously and optimally in the minimum mean-square-error sense.

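    For reference, a standard textbook form of the hybrid information matrix behind the HCRB, in my notation (x the observation, theta_r random with known prior, theta_d deterministic):

        J \;=\; \mathrm{E}_{x,\theta_r;\,\theta_d}\!\left[
            \frac{\partial \ln p(x,\theta_r;\theta_d)}{\partial\theta}
            \left(\frac{\partial \ln p(x,\theta_r;\theta_d)}{\partial\theta}\right)^{\!T}
        \right],
        \qquad
        \theta=\begin{bmatrix}\theta_d\\ \theta_r\end{bmatrix},

    and the bound reads \mathrm{E}\big[(\hat\theta-\theta)(\hat\theta-\theta)^T\big]\succeq J^{-1} under suitable regularity and (hybrid) unbiasedness conditions.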
  • A Polynomial Approximation Algorithm for Real-Time Maximum-Likelihood Estimation

    Publication Year: 2009 , Page(s): 2085 - 2095
    Cited by:  Papers (2)

    Maximum-likelihood estimation subject to nonlinear measurement functions is generally performed through optimization algorithms when accuracy is required and enough processing time is available, or with recursive filters for real-time applications at the expense of a loss of accuracy. In this paper, we propose a new estimator for parameter estimation based on a polynomial approximation of the measurement signal. The raw dataset is replaced by n + 1 independent polynomial samples (PSs) for a smoothing polynomial of order n, resulting in a reduction of the computational burden. It is shown that the PSs must be sampled at certain deterministic instants, and an approximate formula for the variance of the PSs is provided. Moreover, it is proved, and illustrated on three examples, that the new estimator operating on the PSs is equivalent to the standard maximum-likelihood estimator based on the raw dataset, provided that the measurement function and its first derivatives can be approximated by a polynomial of order n. Since the algorithm proceeds from a compact representation of the measurement signal, it can find applications in real-time processing, power-saving processing, and estimation from compressed data, even though this last field has not yet been investigated from a theoretical perspective. Its structure, made up of several separate tasks, is also suited to distributed processing problems. Because the performance of the method is tied to the quality of the polynomial approximation, the algorithm is well suited to smooth measurement functions, as in trajectory estimation applications.

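    The data-reduction step at the heart of the method can be pictured in a few lines of code: fit an order-n smoothing polynomial to the raw record and keep only n + 1 samples of it. The paper derives specific optimal sampling instants; the Chebyshev nodes below are only an illustrative stand-in.

        import numpy as np

        N, n = 1000, 4                                       # record length, poly order
        t = np.linspace(-1.0, 1.0, N)
        y = np.exp(0.5 * t) + 0.05 * np.random.randn(N)      # noisy smooth measurement

        coeffs = np.polynomial.polynomial.polyfit(t, y, n)   # LS smoothing polynomial
        t_ps = np.cos(np.pi * (2 * np.arange(n + 1) + 1) / (2 * n + 2))  # illustrative nodes
        ps = np.polynomial.polynomial.polyval(t_ps, coeffs)  # the n + 1 polynomial samples

        # Downstream ML estimation then operates on (t_ps, ps) instead of the N raw samples.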
  • A New Class of Multilinear Functions for Polynomial Phase Signal Analysis

    Publication Year: 2009 , Page(s): 2096 - 2109
    Cited by:  Papers (7)

    This paper introduces a new class of multilinear functions which can be used for analyzing a signal with time-varying frequency. The new class subsumes a number of existing functions, including the higher order ambiguity functions (HAFs), the polynomial Wigner-Ville distributions (PWVDs), and the higher order phase (HP) functions. As well as establishing a link between these existing functions, the new class provides a formalism which allows for the creation of useful new multilinear functions. A number of new functions are derived from the class.

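    Of the functions subsumed by the class, the higher-order ambiguity function (HAF) is easy to demonstrate: M - 1 conjugate lag products reduce an order-M polynomial phase to a single tone of frequency a_M * M! * tau^(M-1) radians per sample. The sketch below is the standard HAF construction, not one of the paper's new functions.

        import numpy as np
        from math import factorial

        def haf_leading_coeff(x, M, tau):
            """Estimate the leading coefficient a_M of an order-M polynomial
            phase signal via M - 1 conjugate lag products and an FFT peak."""
            y = x.copy()
            for _ in range(M - 1):
                y = y[tau:] * np.conj(y[:-tau])   # each product lowers the order by one
            spec = np.fft.fft(y, 8 * len(y))
            omega = 2.0 * np.pi * np.argmax(np.abs(spec)) / len(spec)
            return omega / (factorial(M) * tau ** (M - 1))

        n = np.arange(512.0)
        x = np.exp(1j * 1e-6 * n ** 3)            # cubic phase, a_3 = 1e-6
        print(haf_leading_coeff(x, M=3, tau=64))  # approximately 1e-6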
  • A Context Quantization Approach to Universal Denoising

    Publication Year: 2009 , Page(s): 2110 - 2129
    Cited by:  Papers (1)

    We revisit the problem of denoising a discrete-time, continuous-amplitude signal corrupted by a known memoryless channel. By modifying our earlier approach to the problem, we obtain a scheme that is much more tractable than the original one and at the same time retains the universal optimality properties. The universality refers to the fact that the proposed denoiser asymptotically (with increasing block length of the data) achieves the performance of an optimum denoiser that has full knowledge of the distribution of a source generating the underlying clean sequence; the only restriction being that the distribution is stationary. The optimality, in a sense we will make precise, of the denoiser also holds in the case where the underlying clean sequence is unknown and deterministic and the only source of randomness is in the noise. The schemes involve a simple preprocessing step of quantizing the noisy symbols to generate quantized contexts. The quantized context value corresponding to each sequence component is then used to partition the unquantized symbols into subsequences. A universal symbol-by-symbol denoiser (for unquantized sequences) is then separately employed on each of the subsequences. We identify a rate at which the context length and quantization resolution should be increased so that the resulting scheme is universal. The proposed family of schemes is computationally attractive with an upper bound on complexity which is independent of the context length and the quantization resolution. Initial experimentation seems to indicate that these schemes are not only superior from a computational viewpoint, but also achieve better denoising in practice.

  • Unified Pascal Matrix for First-Order s–z Domain Transformations

    Publication Year: 2009 , Page(s): 2130 - 2139

    The so-called generalized Pascal matrix is used for transforming a continuous-time (CT) linear system (filter) into a discrete-time (DT) one. This paper derives an explicit expression for a new generalized Pascal matrix, called the unified Pascal matrix, from a unified first-order s-to-z transformation model and rigorously proves its inverses for various first-order s-to-z transformations. After deriving a recurrence formula for recursively generating the inner elements of the unified Pascal matrix from its boundary elements, we show that the recurrence formula leads to computationally unstable solutions for high-order systems due to so-called catastrophic cancellation in numerical computation, but that the instability can be resolved by partitioning the whole unified Pascal matrix into several small matrices (submatrices) and then using the recurrence formula to compute the submatrices individually from their boundary elements. This operation retains almost the same computational complexity while guaranteeing numerically stable solutions. Moreover, an interesting property of the unified Pascal matrix is proved.

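    What any first-order Pascal-matrix transformation tabulates can be cross-checked by brute-force substitution: insert s = c(z - 1)/(z + 1) into an s-domain polynomial and clear the (z + 1) denominator. A sketch of that check (the bilinear map shown is one member of the unified first-order family; this is not the paper's matrix recurrence):

        import numpy as np

        def first_order_sz(coeffs_s, deg, c):
            """Map sum_k a_k s^k (ascending coeffs, degree <= deg) through
            s = c*(z - 1)/(z + 1); returns ascending z-coefficients of
            (z + 1)**deg * P(s)."""
            out = np.zeros(deg + 1)
            for k, a in enumerate(coeffs_s):
                term = np.array([1.0])
                for _ in range(k):                     # factors of c*(z - 1)
                    term = np.convolve(term, [-c, c])
                for _ in range(deg - k):               # remaining factors of (z + 1)
                    term = np.convolve(term, [1.0, 1.0])
                out += a * term                        # every term has degree deg
            return out

        # H(s) = 1/(s + 1) under the bilinear transform s = (2/T)(z - 1)/(z + 1):
        T = 0.1
        num_z = first_order_sz([1.0], 1, 2.0 / T)       # (z + 1)
        den_z = first_order_sz([1.0, 1.0], 1, 2.0 / T)  # (1 - c) + (1 + c) z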
  • Perfect Reconstruction IIR Digital Filter Banks Supporting Nonexpansive Linear Signal Extensions

    Publication Year: 2009 , Page(s): 2140 - 2150
    Cited by:  Papers (2)

    In this paper, perfect reconstruction polyphase infinite impulse response (IIR) filter banks involving causal and anticausal inverses are examined for finite-length signals. First, a novel and efficient nonexpansive perfect reconstruction algorithm based on the state-space implementation is presented. The proposed method is then extended to support linear signal extensions at the boundaries in a nonexpansive manner. The power of the proposed algorithm is demonstrated with image compression results.

  • Novel DCT-Based Real-Valued Discrete Gabor Transform and Its Fast Algorithms

    Publication Year: 2009 , Page(s): 2151 - 2164
    Cited by:  Papers (10)

    The oversampled Gabor transform is more effective than the critically sampled one in many applications. The biorthogonality relationship between the analysis window and the synthesis window of the Gabor transform represents the completeness condition. However, the traditional discrete cosine transform (DCT)-based real-valued discrete Gabor transform (RDGT) is available only in the critically sampled case, and its biorthogonality relationship has not been unveiled. To bridge these important gaps, this paper proposes a novel DCT-based RDGT that can be applied in both the critically sampled and the oversampled cases, and derives the corresponding biorthogonality relationships. The proposed DCT-based RDGT involves only real operations and can utilize fast DCT algorithms for computation, which facilitates computation and implementation in hardware or software compared with the traditional complex-valued discrete Gabor transform. This paper also develops block time-recursive algorithms for the efficient and fast computation of the RDGT and its inverse transform, and presents unified parallel lattice structures for their implementation. Computational complexity analysis and comparisons show that the proposed algorithms provide a more efficient and faster approach to discrete Gabor transforms than existing discrete Gabor transform algorithms. In addition, an application to noise reduction of nuclear magnetic resonance free induction decay signals is presented to show the efficiency of the proposed RDGT for time-frequency analysis.

  • Optimized Least-Square Nonuniform Fast Fourier Transform

    Publication Year: 2009 , Page(s): 2165 - 2177
    Cited by:  Papers (4)

    The main focus of this paper is to derive a memory-efficient approximation to the nonuniform Fourier transform of a support-limited sequence. We show that the standard nonuniform fast Fourier transform (NUFFT) scheme is a shift-invariant approximation of the exact Fourier transform. Based on the theory of shift-invariant representations, we derive an exact expression for the worst-case mean square approximation error. Using this metric, we evaluate the optimal scale factors and the interpolator that provides the least approximation error. We also derive an upper bound for the error component due to the lookup-table-based evaluation of the interpolator, and use this metric to ensure that this component is not the dominant one. Theoretical and experimental comparisons with standard NUFFT schemes clearly demonstrate a significant improvement in accuracy over conventional schemes, especially when the size of the uniform fast Fourier transform (FFT) is small. Since the memory requirement of the algorithm depends on the size of the uniform FFT, the proposed developments can lead to iterative signal reconstruction algorithms with significantly lower memory demands.

  • Dictionary Learning for Sparse Approximations With the Majorization Method

    Publication Year: 2009 , Page(s): 2178 - 2191
    Cited by:  Papers (39)

    In order to find sparse approximations of signals, an appropriate generative model for the signal class has to be known. If the model is unknown, it can be adapted using a set of training samples. This paper presents a novel method for dictionary learning and extends the learning problem by introducing different constraints on the dictionary. The convergence of the proposed method to a fixed point is guaranteed, unless the accumulation points form a continuum; this holds for different sparsity measures. The majorization method is an optimization method that substitutes the original objective function with a surrogate function that is updated in each optimization step. It has been used successfully in sparse approximation and statistical estimation [e.g., expectation-maximization (EM)] problems, and this paper shows that it can be used for the dictionary learning problem as well. The proposed method is compared with other methods on both synthetic and real data, and different constraints on the dictionary are compared. Simulations show the advantages of the proposed method over other currently available dictionary learning methods, not only in terms of average performance but also in terms of computation time.

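    The flavor of the majorization updates is easy to sketch: a quadratic surrogate turns each subproblem into a Landweber-type step followed by enforcement of the constraint. The two steps below are illustrative of that pattern only; the step constants and the unit-norm/l1 choices are mine, not the paper's exact algorithm.

        import numpy as np

        def dict_step(X, D, A):
            """Majorized descent on ||X - D A||_F^2 w.r.t. D, then projection
            onto the unit-norm-column constraint set."""
            c = 1.01 * np.linalg.norm(A @ A.T, 2)     # surrogate constant > ||A A^T||_2
            D = D + (X - D @ A) @ A.T / c
            return D / np.maximum(np.linalg.norm(D, axis=0), 1e-12)

        def coef_step(X, D, A, lam):
            """Matching majorized step on the coefficients with an l1 penalty."""
            L = 1.01 * np.linalg.norm(D.T @ D, 2)
            A = A + D.T @ (X - D @ A) / L
            return np.sign(A) * np.maximum(np.abs(A) - lam / L, 0.0)  # soft threshold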
  • Switching Strategies for Sequential Decision Problems With Multiplicative Loss With Application to Portfolios

    Publication Year: 2009 , Page(s): 2192 - 2208
    Cited by:  Papers (10)

    A wide variety of problems in signal processing can be formulated such that decisions are made by sequentially taking convex combinations of vector-valued observations, and these convex combinations are then multiplicatively compounded over time. A "universal" approach to such problems might attempt to sequentially achieve the performance of the best fixed convex combination, as might be achievable noncausally by observing all of the outcomes in advance. By permitting different piecewise-fixed strategies within contiguous regions of time, the best algorithm in this broader class would be able to switch between different fixed strategies to optimize performance to the changing behavior of each individual sequence of outcomes. Without knowledge of the data length or the number of switches necessary, the algorithms developed in this paper can achieve the performance of the best piecewise-fixed strategy that can choose both the partitioning of the sequence of outcomes in time and the best strategy within each time segment. We compete with an exponential number of such partitions using only complexity linear in the data length, and demonstrate that the regret with respect to the best such algorithm is at most O(ln(n)) in the exponent, where n is the data length. Finally, we extend these results to include finite collections of candidate algorithms rather than convex combinations, and further investigate the use of an arbitrary side-information sequence.

  • On the Convergence of ICA Algorithms With Symmetric Orthogonalization

    Publication Year: 2009 , Page(s): 2209 - 2221
    Cited by:  Papers (8)

    The independent component analysis (ICA) problem is often posed as the maximization/minimization of an objective/cost function under a unitary constraint, which presumes prewhitening of the observed mixtures. The parallel adaptive algorithms corresponding to this optimization setting, where all the separators are jointly trained, are typically implemented by a gradient-based update of the separation matrix followed by the so-called symmetric orthogonalization procedure to impose the unitary constraint. This article addresses the convergence analysis of such algorithms, which has been considered a difficult task due to the complication caused by the minimum (Frobenius or induced 2-norm) distance mapping step. We first provide a general characterization of the stationary points corresponding to these algorithms. Furthermore, we show that fixed-point algorithms employing symmetric orthogonalization are monotonically convergent for convex objective functions, and we later generalize this convergence result to nonconvex objective functions. In the last part of the article, we concentrate on the kurtosis objective function as a special case: we provide a new set of critical points based on Householder reflections, together with an analysis classifying these critical points as minima, maxima, or saddle points.

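    The constraint-enforcement step analyzed in the paper is the minimum-distance mapping of the updated separator onto the unitary matrices, W <- (W W^H)^(-1/2) W. A sketch (eigendecomposition route; an SVD U S V^H followed by U V^H gives the same result):

        import numpy as np

        def sym_orth(W):
            """Symmetric orthogonalization: nearest unitary matrix to W."""
            d, E = np.linalg.eigh(W @ W.conj().T)
            return E @ np.diag(d ** -0.5) @ E.conj().T @ W

        # Typical use inside a parallel ICA iteration (gradient step, then constraint):
        #   W = sym_orth(W + mu * grad_J(W))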
  • Nonorthogonal Joint Diagonalization by Combining Givens and Hyperbolic Rotations

    Publication Year: 2009 , Page(s): 2222 - 2231
    Cited by:  Papers (26)

    A new algorithm for computing the nonorthogonal joint diagonalization of a set of matrices is proposed for independent component analysis and blind source separation applications. The algorithm is an extension of the Jacobi-like algorithm first proposed in the joint approximate diagonalization of eigenmatrices (JADE) method for orthogonal joint diagonalization. The improvement consists mainly in computing a mixing matrix of determinant one with columns of equal norm, instead of an orthogonal mixing matrix. This target matrix is constructed iteratively by successive multiplications of not only Givens rotations but also hyperbolic rotations and diagonal matrices. The algorithm's performance, evaluated on synthetic data, compares favorably with existing methods in terms of speed of convergence and complexity.

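    The elementary factors the algorithm accumulates are 2 x 2 Givens and hyperbolic rotations, each of determinant one, which is why the resulting mixing matrix keeps determinant one. A sketch of the two factor types embedded in an n x n identity (the sweep logic and parameter selection are the paper's contribution and are not shown):

        import numpy as np

        def givens(n, i, j, theta):
            """Givens rotation on coordinates (i, j); det = +1."""
            G = np.eye(n)
            c, s = np.cos(theta), np.sin(theta)
            G[i, i], G[j, j], G[i, j], G[j, i] = c, c, -s, s
            return G

        def hyperbolic(n, i, j, phi):
            """Hyperbolic rotation on (i, j); det = cosh^2 - sinh^2 = +1."""
            H = np.eye(n)
            ch, sh = np.cosh(phi), np.sinh(phi)
            H[i, i], H[j, j], H[i, j], H[j, i] = ch, ch, sh, sh
            return H

        # A Jacobi-like sweep applies A_k <- T A_k T.T to every target matrix,
        # with T a product of such factors chosen to reduce off-diagonal energy.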
  • A Quadratic Programming Approach to Blind Equalization and Signal Separation

    Publication Year: 2009 , Page(s): 2232 - 2244
    Cited by:  Papers (6)

    Blind equalization and signal separation are two well-established signal processing problems. In this paper, we present a quadratic programming algorithm for fast blind equalization and signal separation. By introducing a special non-mean-square error (MSE) objective function, we reformulate fractionally spaced blind equalization into an equivalent quadratic programming problem. Based on a clear geometric interpretation and a formal proof, we show that a perfect equalization solution is obtained at every local optimum of the quadratic program. Because blind source separation is, by nature and mathematically, a closely related problem, we also generalize the algorithm for blind signal separation. We show that by enforcing source orthogonalization through successive processing, the quadratic programming approach can be applied effectively. Moreover, the quadratic program is easily extendible to incorporate additional practical conditions, such as jamming suppression constraints. We also provide evidence of good performance through computer simulations.

  • Distributed Arithmetic Coding for the Slepian–Wolf Problem

    Publication Year: 2009 , Page(s): 2245 - 2257
    Cited by:  Papers (13)

    Distributed source coding schemes are typically based on the use of channel codes as source codes. In this paper we propose a new paradigm, named "distributed arithmetic coding," which extends arithmetic codes to the distributed case by employing sequential decoding aided by the side information. In particular, we introduce a distributed binary arithmetic coder for the Slepian-Wolf coding problem, along with a joint decoder. The proposed scheme can be applied to two sources in both the asymmetric mode, wherein one source acts as side information, and the symmetric mode, wherein both sources are coded with ambiguity, at any combination of achievable rates. Distributed arithmetic coding provides several advantages over existing Slepian-Wolf coders, notably good performance at small block lengths and the ability to incorporate arbitrary source models in the encoding process, e.g., context-based statistical models, in much the same way as a classical arithmetic coder. We have compared the performance of distributed arithmetic coding with turbo codes and low-density parity-check codes, and found that the proposed approach is very competitive.

  • M-Description Lattice Vector Quantization: Index Assignment and Analysis

    Publication Year: 2009 , Page(s): 2258 - 2274
    Cited by:  Papers (2)

    In this paper, we investigate the design of symmetric entropy-constrained multiple description lattice vector quantization (MDLVQ), more specifically, MDLVQ index assignment. We consider a fine lattice containing clean similar sublattices with S-similarity. Due to the S-similarity of the sublattices, an M-fraction lattice can be used to regularly partition the fine lattice with smaller Voronoi cells than a sublattice does. With this partition, the MDLVQ index assignment design can be translated into a transportation problem in operations research. Both greedy and general algorithms are developed to pursue optimality of the index assignment. Under the high-resolution assumption, we compare the proposed schemes with other relevant techniques in terms of optimality and complexity. Following our index assignment design, we also obtain an asymptotic closed-form expression for the k-description side distortion. Simulation results on coding Gaussian, speech, and image sources are presented to validate the effectiveness of the proposed schemes.

  • High-Resolution Radar via Compressed Sensing

    Publication Year: 2009 , Page(s): 2275 - 2284
    Cited by:  Papers (197)  |  Patents (2)

    A stylized compressed sensing radar is proposed in which the time-frequency plane is discretized into an N × N grid. Assuming the number of targets K is small (i.e., K ≪ N²), we can transmit a sufficiently "incoherent" pulse and employ the techniques of compressed sensing to reconstruct the target scene. A theoretical upper bound on the sparsity K is presented. Numerical simulations verify that even better performance can be achieved in practice. This novel compressed sensing approach offers great potential for better resolution than classical radar.

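    A toy version of this scheme fits in a few lines: discretize delay-Doppler, build the dictionary of time-frequency shifts of a random-phase pulse, and recover a K-sparse scene. Orthogonal matching pursuit stands in here for the reconstruction step, purely as a compact illustration.

        import numpy as np

        rng = np.random.default_rng(0)
        N, K = 32, 3                                   # N x N grid, K targets
        p = np.exp(1j * 2 * np.pi * rng.random(N))     # random-phase ("incoherent") pulse

        n = np.arange(N)
        A = np.stack([np.roll(p, d) * np.exp(1j * 2 * np.pi * f * n / N)
                      for d in range(N) for f in range(N)], axis=1)
        A /= np.linalg.norm(A, axis=0)

        x_true = np.zeros(N * N, complex)
        x_true[rng.choice(N * N, K, replace=False)] = 1.0
        y = A @ x_true                                 # noiseless received echo

        support, r = [], y.copy()                      # orthogonal matching pursuit
        for _ in range(K):
            support.append(int(np.argmax(np.abs(A.conj().T @ r))))
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            r = y - A[:, support] @ coef
        print(sorted(support), np.flatnonzero(x_true))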
  • Systematic Construction of Linear Transform Based Full-Diversity, Rate-One Space–Time Frequency Codes

    Publication Year: 2009 , Page(s): 2285 - 2298
    Cited by:  Papers (3)

    In this paper, we generalize the existing rate-one space-frequency (SF) and space-time frequency (STF) code constructions. The objective of this exercise is to provide a systematic design of full-diversity STF codes with high coding gain. Under this generalization, STF codes are formulated as linear transformations of data. Conditions on these linear transforms are then derived so that the resulting STF codes achieve full diversity and high coding gain with a moderate decoding complexity. Many of these conditions involve channel parameters like the delay profile (DP) and temporal correlation. When these quantities are not available at the transmitter, design of codes that exploit full diversity on channels with arbitrary DP and temporal correlation is considered. A complete characterization of a class of such robust codes is provided and their bit error rate (BER) performance is evaluated. On the other hand, when the channel DP and temporal correlation are available at the transmitter, the linear transforms are optimized to maximize the coding gain of full-diversity STF codes. The BER performance of such optimized codes is shown to be better than those of existing codes.

  • Adaptive Algorithms to Track the PARAFAC Decomposition of a Third-Order Tensor

    Publication Year: 2009 , Page(s): 2299 - 2310
    Cited by:  Papers (20)

    The PARAFAC decomposition of a higher-order tensor is a powerful multilinear algebra tool that is becoming more and more popular in a number of disciplines. Existing PARAFAC algorithms are computationally demanding and operate in batch mode, both serious drawbacks for online applications. When the data are serially acquired, or the underlying model changes with time, adaptive PARAFAC algorithms that can track the sought decomposition at low complexity would be highly desirable. This is a challenging task that has not been addressed in the literature, and it is the topic of this paper. Given an estimate of the PARAFAC decomposition of a tensor at instant t, we propose two adaptive algorithms to update the decomposition at instant t+1, the new tensor being obtained from the old one after appending a new slice in the 'time' dimension. The proposed algorithms can yield estimation performance that is very close to that obtained via repeated application of state-of-the-art batch algorithms, at orders of magnitude lower complexity. The effectiveness of the proposed algorithms is illustrated using a MIMO radar application (tracking of directions of arrival and directions of departure) as an example.

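    The batch baseline that the adaptive algorithms track is alternating least squares (ALS) for the PARAFAC/CP decomposition. A compact ALS sketch for a third-order tensor follows; the paper's low-complexity slice-by-slice updates are not shown.

        import numpy as np

        def khatri_rao(B, C):
            """Column-wise Kronecker product: (J x R), (K x R) -> (JK x R)."""
            J, R = B.shape
            return (B[:, None, :] * C[None, :, :]).reshape(J * C.shape[0], R)

        def parafac_als(X, R, iters=100):
            """Rank-R CP decomposition of X (I x J x K), X ~ sum_r a_r o b_r o c_r."""
            I, J, K = X.shape
            rng = np.random.default_rng(0)
            A, B, C = (rng.standard_normal((d, R)) for d in (I, J, K))
            X1 = X.reshape(I, J * K)                       # mode-1 unfolding
            X2 = X.transpose(1, 0, 2).reshape(J, I * K)    # mode-2 unfolding
            X3 = X.transpose(2, 0, 1).reshape(K, I * J)    # mode-3 unfolding
            for _ in range(iters):
                A = X1 @ np.linalg.pinv(khatri_rao(B, C)).T
                B = X2 @ np.linalg.pinv(khatri_rao(A, C)).T
                C = X3 @ np.linalg.pinv(khatri_rao(A, B)).T
            return A, B, C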
  • Bandwidth Efficient Cooperative TDOA Computation for Multicarrier Signals of Opportunity

    Publication Year: 2009 , Page(s): 2311 - 2322
    Cited by:  Papers (15)  |  Patents (1)

    Source localization, the problem of determining the physical location of an acoustic or wireless emitter, is commonly encountered in sensor networks that attempt to locate and track an emitter. Similarly, in navigation systems that do not rely on the global positioning system (GPS), "signals of opportunity" (existing wireless infrastructure) can be used as ad hoc navigation beacons, and the goal is to determine their location relative to a receiver and thus deduce the receiver's position. These two research problems have a very similar mathematical structure. Specifically, in either the source localization or the navigation problem, one common approach relies on time difference of arrival (TDOA) measurements at multiple sensors. In this paper, we investigate a bandwidth-efficient method of TDOA computation when the signals of opportunity use multicarrier modulation. By exploiting the structure of the multicarrier transmission, much less information needs to be exchanged between sensors compared to the standard cross-correlation approach. Analytic and simulation results quantify the performance of the proposed algorithm as a function of the signal-to-noise ratio (SNR) and the bandwidth between the sensors.

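    The baseline the proposed method economizes on is full-record cross-correlation between sensors. A sketch of that standard FFT-based TDOA estimator follows (the paper's point is that multicarrier structure lets sensors exchange far less than these full records):

        import numpy as np

        def tdoa_xcorr(r1, r2, fs):
            """Delay of r2 relative to r1, in seconds, from the peak of the
            FFT-computed cross-correlation (zero-padded to avoid wraparound)."""
            nfft = 1 << (len(r1) + len(r2)).bit_length()
            cc = np.fft.irfft(np.fft.rfft(r2, nfft) * np.conj(np.fft.rfft(r1, nfft)), nfft)
            lags = np.arange(nfft)
            lags[lags > nfft // 2] -= nfft              # wrap to signed lags
            return lags[np.argmax(np.abs(cc))] / fs

        fs = 1e3
        t = np.arange(1024) / fs
        s = np.sin(2 * np.pi * 40 * t) * np.exp(-((t - 0.3) / 0.05) ** 2)
        print(tdoa_xcorr(s, np.roll(s, 25), fs))        # approximately 25 / fs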
  • Spectrum Sharing in Wireless Networks via QoS-Aware Secondary Multicast Beamforming

    Publication Year: 2009 , Page(s): 2323 - 2335
    Cited by:  Papers (67)

    Secondary spectrum usage has the potential to considerably increase spectrum utilization. In this paper, quality-of-service (QoS)-aware spectrum underlay of a secondary multicast network is considered. A multiantenna secondary access point (AP) is used for multicast (common information) transmission to a number of secondary single-antenna receivers. The idea is that beamforming can be used to steer power towards the secondary receivers while limiting the sidelobes that cause interference to primary receivers. Various optimal formulations of beamforming are proposed, motivated by different "cohabitation" scenarios, including robust designs that are applicable with inaccurate or limited channel state information at the secondary AP. These formulations are NP-hard computational problems; yet it is shown how convex approximation-based multicast beamforming tools (originally developed without regard to primary interference constraints) can be adapted to work in a spectrum underlay context. Extensive simulation results demonstrate the effectiveness of the proposed approaches and provide insights into the tradeoffs between different design criteria.

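    In the multicast-beamforming literature this kind of convex approximation is typically a semidefinite relaxation: replace the rank-one beamformer w w^H by a PSD matrix X, impose the QoS and interference constraints linearly in X, and extract a beamformer afterwards. A sketch of that relaxation with cvxpy, on random channels with illustrative sizes (the paper develops several refined formulations beyond this):

        import numpy as np
        import cvxpy as cp

        rng = np.random.default_rng(1)
        N, M, L = 4, 6, 2                    # AP antennas, secondary users, primary users
        H = [np.outer(h, h.conj()) for h in
             rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))]
        G = [np.outer(g, g.conj()) for g in
             rng.standard_normal((L, N)) + 1j * rng.standard_normal((L, N))]
        eps = 0.1                            # interference cap at primary receivers

        X = cp.Variable((N, N), hermitian=True)
        cons = [X >> 0]
        cons += [cp.real(cp.trace(Hi @ X)) >= 1.0 for Hi in H]   # secondary QoS
        cons += [cp.real(cp.trace(Gj @ X)) <= eps for Gj in G]   # primary protection
        cp.Problem(cp.Minimize(cp.real(cp.trace(X))), cons).solve()

        d, V = np.linalg.eigh(X.value)       # rank-one extraction: leading eigenvector
        w = np.sqrt(d[-1]) * V[:, -1]        # (Gaussian randomization would follow in practice)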

Aims & Scope

IEEE Transactions on Signal Processing covers novel theory, algorithms, performance analyses and applications of techniques for the processing, understanding, learning, retrieval, mining, and extraction of information from signals.


Meet Our Editors

Editor-in-Chief
Sergios Theodoridis
University of Athens