
Signal Processing, IEEE Transactions on

Issue 7 • Date July 2010

Displaying Results 1 - 25 of 60
  • Table of contents

    Publication Year: 2010 , Page(s): C1 - C4
    PDF (135 KB)
    Freely Available from IEEE
  • IEEE Transactions on Signal Processing publication information

    Publication Year: 2010 , Page(s): C2
    PDF (39 KB)
    Freely Available from IEEE
  • Announcing a New Peer Review Model for the IEEE Transactions on Signal Processing

    Publication Year: 2010 , Page(s): 3425
    PDF (24 KB) | HTML
    Freely Available from IEEE
  • A Repeated Significance Test With Applications To Sequential Detection In Sensor Networks

    Publication Year: 2010 , Page(s): 3426 - 3435
    Cited by:  Papers (3)
    PDF (644 KB) | HTML

    In this paper we introduce a randomly truncated sequential hypothesis test. Using the framework of a repeated significance test (RST), we study a sequential test with truncation time based on a random stopping time. Using the functional central limit theorem (FCLT) for a sequence of statistics, we derive a general result that can be employed in developing a repeated significance test with random sample size. We present effective methods for evaluating accurate approximations for the probability of type I error and the power function. Numerical results are presented to evaluate the accuracy of these approximations. We apply the proposed test to a decentralized sequential detection problem in sensor networks (SNs) with communication constraints. Finally, a sequential detection problem with measurements at random times is investigated.

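    Not the paper's construction, but a minimal sketch of the repeated-significance-test idea it builds on: a cumulative statistic is renormalized and compared against a boundary as each sample arrives, and the test is truncated (here at a fixed time; the paper's truncation time is random).

```python
import numpy as np

def repeated_significance_test(x, boundary, truncation):
    """Sequentially test H0: zero mean vs. H1: nonzero mean. The
    normalized cumulative sum |S_n|/sqrt(n) is compared with a boundary
    at every new sample; crossing it stops the test and rejects H0."""
    s, n = 0.0, 0
    for n, xn in enumerate(x, start=1):
        s += xn
        if abs(s) / np.sqrt(n) > boundary:
            return "reject H0", n          # alarm at stopping time n
        if n >= truncation:
            break
    return "accept H0", n

rng = np.random.default_rng(0)
print(repeated_significance_test(rng.normal(0.0, 1.0, 500), 3.5, 500))  # H0 true
print(repeated_significance_test(rng.normal(0.5, 1.0, 500), 3.5, 500))  # H1: early alarm
```
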
  • Online Adaptive Estimation of Sparse Signals: Where RLS Meets the ℓ1-Norm

    Publication Year: 2010 , Page(s): 3436 - 3447
    Cited by:  Papers (65)
    PDF (1432 KB) | HTML

    Using the ℓ1-norm to regularize the least-squares criterion, the batch least-absolute shrinkage and selection operator (Lasso) has well-documented merits for estimating sparse signals of interest emerging in various applications where observations adhere to parsimonious linear regression models. To cope with the high complexity, increasing memory requirements, and lack of tracking capability that batch Lasso estimators face when processing observations sequentially, the present paper develops a novel time-weighted Lasso (TWL) approach. Performance analysis reveals that TWL cannot consistently estimate the desired signal support without compromising its rate of convergence. This motivates the development of a time- and norm-weighted Lasso (TNWL) scheme with ℓ1-norm weights obtained from the recursive least-squares (RLS) algorithm. The resultant algorithm consistently estimates the support of sparse signals without reducing the convergence rate. To enable sparsity-aware recursive real-time processing, novel adaptive algorithms are also developed based on online coordinate descent solvers of TWL and TNWL that provably converge to the true sparse signal in the time-invariant case. Simulated tests compare competing alternatives and corroborate the performance of the novel algorithms in estimating time-invariant signals and tracking time-varying signals under sparsity constraints.

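    As a rough, hypothetical illustration of the time-weighted Lasso idea (exponentially discounted RLS-type statistics combined with soft-thresholding coordinate descent); the update and the scaling of the regularizer are simplified relative to the paper's TWL/TNWL algorithms.

```python
import numpy as np

def soft(z, t):
    """Soft-thresholding operator."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def tw_lasso_online(H, y, lam=1.0, beta=0.99, sweeps=1):
    """Sketch of an online coordinate-descent solver for an
    exponentially time-weighted Lasso cost (discount factor beta)."""
    n, p = H.shape
    R = np.zeros((p, p))     # discounted correlation matrix of regressors
    r = np.zeros(p)          # discounted cross-correlation with the data
    s = np.zeros(p)
    for t in range(n):
        h = H[t]
        R = beta * R + np.outer(h, h)
        r = beta * r + y[t] * h
        for _ in range(sweeps):            # a few CD sweeps per sample
            for k in range(p):
                rho = r[k] - R[k] @ s + R[k, k] * s[k]
                s[k] = soft(rho, lam) / R[k, k] if R[k, k] > 0 else 0.0
    return s

rng = np.random.default_rng(1)
s_true = np.zeros(20); s_true[[2, 7, 15]] = [1.0, -0.8, 0.5]
H = rng.normal(size=(400, 20))
y = H @ s_true + 0.05 * rng.normal(size=400)
print(np.round(tw_lasso_online(H, y), 2))   # support recovered, slight shrinkage bias
```
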
  • Representation and Generation of Non-Gaussian Wide-Sense Stationary Random Processes With Arbitrary PSDs and a Class of PDFs

    Publication Year: 2010 , Page(s): 3448 - 3458
    Cited by:  Papers (4)
    PDF (806 KB) | HTML

    A new method for representing and generating realizations of a wide-sense stationary non-Gaussian random process is described. The representation allows one to independently specify the power spectral density and the first-order probability density function of the random process. The only proviso is that the probability density function must be symmetric and infinitely divisible. The proposed method models the sinusoidal component frequencies as random variables, a key departure from the usual representation of a wide-sense stationary random process given by the spectral theorem. Ergodicity in the mean and autocorrelation is also proven under certain conditions. An example is given to illustrate the method's application to the K distribution, which is important in many physical modeling problems in radar and sonar.

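    The random-frequency idea can be sketched in a simplified Gaussian special case: drawing sinusoid frequencies from the normalized target PSD and phases uniformly yields a process with the prescribed PSD (the paper's construction additionally shapes the first-order pdf, which this sketch does not).

```python
import numpy as np

def random_frequency_process(psd, freqs, n_samples, n_sinusoids=500, rng=None):
    """Superpose sinusoids whose frequencies are drawn from the
    normalized PSD (treated as a pdf) with uniform random phases; the
    result has the target PSD and, by the CLT, a near-Gaussian pdf."""
    rng = rng or np.random.default_rng()
    p = psd / psd.sum()
    f = rng.choice(freqs, size=n_sinusoids, p=p)
    phi = rng.uniform(0, 2 * np.pi, n_sinusoids)
    t = np.arange(n_samples)
    amp = np.sqrt(2.0 / n_sinusoids)       # unit-variance normalization
    return (amp * np.cos(2 * np.pi * np.outer(t, f) + phi)).sum(axis=1)

freqs = np.linspace(0, 0.5, 256)
psd = 1.0 / (1.0 + (freqs / 0.05) ** 2)    # a Lorentzian-shaped target PSD
x = random_frequency_process(psd, freqs, 4096)
print(x.var())                              # close to 1
```
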
  • Testing Stationarity With Surrogates: A Time-Frequency Approach

    Publication Year: 2010 , Page(s): 3459 - 3470
    Cited by:  Papers (23)
    PDF (1290 KB) | HTML

    An operational framework is developed for testing stationarity relative to an observation scale, in both stochastic and deterministic contexts. The proposed method is based on a comparison between global and local time-frequency features. The originality is to make use of a family of stationary surrogates to define the null hypothesis of stationarity and to base two different statistical tests on them. The first test makes use of suitably chosen distances between local and global spectra, whereas the second is implemented as a one-class classifier, with the time-frequency features extracted from the surrogates interpreted as a learning set for stationarity. The principle of the method and of its two variations is presented, and results are shown on typical models of signals that can be considered stationary or nonstationary depending on the observation scale used.

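    The surrogate idea is easy to sketch: phase randomization preserves the global spectrum while destroying time-localized structure, and the statistic of the observed signal is compared with its null distribution over surrogates. The distance below is a crude stand-in for the paper's time-frequency features.

```python
import numpy as np

def surrogate(x, rng):
    """Stationarized surrogate: keep the Fourier magnitudes (global
    spectrum) and randomize the phases."""
    X = np.fft.rfft(x)
    phase = rng.uniform(0, 2 * np.pi, X.size)
    phase[0] = 0.0                      # keep DC real
    if x.size % 2 == 0:
        phase[-1] = 0.0                 # keep Nyquist bin real
    return np.fft.irfft(np.abs(X) * np.exp(1j * phase), n=x.size)

def stationarity_statistic(x, n_seg=16):
    """Dispersion of local spectra around their average."""
    segs = np.array_split(x, n_seg)
    local = np.array([np.abs(np.fft.rfft(s, 256)) ** 2 for s in segs])
    return np.mean((local - local.mean(axis=0)) ** 2)

rng = np.random.default_rng(2)
x = np.sin(2 * np.pi * 0.1 * np.arange(2048)) * np.linspace(0, 1, 2048)  # nonstationary
null = [stationarity_statistic(surrogate(x, rng)) for _ in range(200)]
print(stationarity_statistic(x) > np.quantile(null, 0.95))  # True: reject stationarity
```
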
  • Closed-Form MMSE Estimation for Signal Denoising Under Sparse Representation Modeling Over a Unitary Dictionary

    Publication Year: 2010 , Page(s): 3471 - 3484
    Cited by:  Papers (19)
    PDF (1335 KB) | HTML

    This paper deals with the Bayesian signal denoising problem, assuming a prior based on a sparse representation modeling over a unitary dictionary. It is well known that the maximum a posteriori probability (MAP) estimator in such a case has a closed-form solution based on a simple shrinkage. The focus in this paper is on the better performing and less familiar minimum-mean-squared-error (MMSE) estimator. We show that this estimator also leads to a simple formula, in the form of a plain recursive expression for evaluating the contribution of every atom in the solution. An extension of the model to real-world signals is also offered, considering heteroscedastic nonzero entries in the representation, and allowing varying probabilities for the chosen atoms and the overall cardinality of the sparse representation. The MAP and MMSE estimators are redeveloped for this extended model, again resulting in closed-form simple algorithms. Finally, the superiority of the MMSE estimator is demonstrated both on synthetically generated signals and on real-world signals (image patches).

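    For intuition, here is the classical decoupled scalar case under a Bernoulli-Gaussian prior (simpler than the paper's extended model): in a unitary dictionary the MMSE estimate reduces to a per-coefficient shrinkage weighted by the posterior probability that the atom is active.

```python
import numpy as np

def gauss(y, var):
    return np.exp(-y * y / (2 * var)) / np.sqrt(2 * np.pi * var)

def mmse_denoise(y, p=0.1, var_a=1.0, var_n=0.1):
    """Per-coefficient MMSE shrinkage, Bernoulli-Gaussian prior:
    the posterior activity probability weights the Wiener-shrunk
    coefficient (assumed prior, not the paper's general model)."""
    post_active = p * gauss(y, var_a + var_n)
    post_inactive = (1 - p) * gauss(y, var_n)
    w = post_active / (post_active + post_inactive)
    return w * (var_a / (var_a + var_n)) * y

rng = np.random.default_rng(3)
a = (rng.random(10000) < 0.1) * rng.normal(0, 1, 10000)   # sparse coefficients
y = a + rng.normal(0, np.sqrt(0.1), 10000)
print(np.mean((mmse_denoise(y) - a) ** 2), np.mean((y - a) ** 2))  # MMSE beats raw
```
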
  • Minimizing Nonconvex Functions for Sparse Vector Reconstruction

    Publication Year: 2010 , Page(s): 3485 - 3496
    Cited by:  Papers (6)
    PDF (1000 KB) | HTML

    In this paper, we develop a novel methodology for minimizing a class of nonconvex (concave on the non-negative orthant) functions for solving an underdetermined linear system of equations As = x when the solution vector s is known a priori to be sparse. The proposed technique is based on locally replacing the original objective function by a quadratic convex function that is easily minimized. The resulting algorithm is iterative and converges to a fixed point of the original objective function. For a certain selection of convex objective functions, the class of algorithms called iterative reweighted least squares (IRLS) is shown to be a special case of the proposed methodology; the proposed algorithms thus generalize and unify these earlier methods. In addition, we propose a new class of algorithms with better convergence properties than the regular IRLS algorithms, which can hence be considered as enhancements of them. Since the original objective functions are nonconvex, the proposed algorithm is susceptible to convergence to a local minimum. To alleviate this difficulty, we propose a random perturbation technique that enhances its performance. Numerical results show that the proposed algorithms outperform some well-known algorithms typically used for the same problem.

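    A minimal IRLS instance of the described methodology, for min Σ|s_i|^p subject to As = x: each iteration replaces the nonconvex objective by a weighted quadratic whose minimum-norm solution is available in closed form (the random perturbation step is omitted here).

```python
import numpy as np

def irls_sparse(A, x, p=0.5, iters=50, eps=1e-8):
    """Iterative reweighted least squares for min sum |s_i|^p s.t. As = x.
    Each step solves a weighted minimum-norm problem that locally
    majorizes the nonconvex objective by a quadratic."""
    s = np.linalg.pinv(A) @ x             # least-norm initialization
    for _ in range(iters):
        w = (s * s + eps) ** (1 - p / 2)  # weights ~ |s_i|^(2-p)
        AW = A * w                        # A @ diag(w)
        s = w * (A.T @ np.linalg.solve(AW @ A.T, x))
    return s

rng = np.random.default_rng(4)
n, m, k = 100, 40, 5
A = rng.normal(size=(m, n))
s_true = np.zeros(n); s_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
x = A @ s_true
print(np.linalg.norm(irls_sparse(A, x) - s_true))   # small recovery error
```
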
  • Nonproduct Data-Dependent Partitions for Mutual Information Estimation: Strong Consistency and Applications

    Publication Year: 2010 , Page(s): 3497 - 3511
    Cited by:  Papers (5)
    PDF (591 KB) | HTML

    A new framework for histogram-based mutual information estimation of probability distributions equipped with density functions in (R^d, B(R^d)) is presented in this work. A general histogram-based estimate is proposed, considering nonproduct data-dependent partitions, and sufficient conditions are stipulated to guarantee a strongly consistent estimate of mutual information. Two emblematic families of density-free strongly consistent estimates are derived from this result, one based on statistically equivalent blocks (Gessaman's partition) and the other on a tree-structured vector quantization scheme.

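    For contrast with the paper's nonproduct data-dependent partitions, a basic plug-in histogram estimate over a fixed product partition looks like this.

```python
import numpy as np

def histogram_mi(x, y, bins=16):
    """Plug-in mutual information estimate from a 2-D histogram over a
    fixed product partition (a simpler baseline than the paper's
    data-dependent nonproduct partitions)."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0
    return float(np.sum(pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])))

rng = np.random.default_rng(5)
x = rng.normal(size=100000)
y = 0.8 * x + 0.6 * rng.normal(size=100000)          # correlation rho = 0.8
print(histogram_mi(x, y), -0.5 * np.log(1 - 0.8 ** 2))  # estimate vs. Gaussian truth
```
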
  • A New Robust Estimation Method for ARMA Models

    Publication Year: 2010 , Page(s): 3512 - 3522
    Cited by:  Papers (9)
    PDF (877 KB) | HTML

    The autoregressive moving-average (ARMA) modeling of time series is popular and used in many applications. In this paper, we introduce a new robust method to estimate the parameters of a Gaussian ARMA model contaminated by outliers. The method makes use of the median and is termed the ratio-of-medians estimator (RME). Ratios of medians are used to robustly estimate the autocorrelation function and, from it, the model parameters. Its theoretical robustness is analyzed by computing robust measures such as the maximum bias, breakdown point, and influence function. The RME estimator is shown to be asymptotically normal, and its asymptotic variance is computed under Gaussian autoregressive models of order p (p ≥ 1). The new method is evaluated and compared with other robust methods via simulations. Its effectiveness in terms of parameter estimation and forecasting is demonstrated on French daily electricity consumption data. The new approach improves load forecasting quality for “normal days” and presents several interesting properties, including good robustness, fast execution, simplicity, and easy online implementation.

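    The paper's ratio-of-medians construction is specific; as a generic stand-in for the same pipeline (robust autocorrelation, then parameters via Yule-Walker), the sketch below uses a Gnanadesikan-Kettenring-type identity with the MAD as the robust scale. This is not the RME.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def mad(x):
    """Median absolute deviation, scaled for consistency at the Gaussian."""
    return 1.4826 * np.median(np.abs(x - np.median(x)))

def robust_acf(x, max_lag):
    """Robust autocorrelation via corr = (S(u+v)^2 - S(u-v)^2) /
    (S(u+v)^2 + S(u-v)^2) with MAD as the robust scale S."""
    rho = [1.0]
    for h in range(1, max_lag + 1):
        u, v = x[:-h], x[h:]
        a, b = mad(u + v) ** 2, mad(u - v) ** 2
        rho.append((a - b) / (a + b))
    return np.array(rho)

def robust_ar_fit(x, p):
    """AR(p) coefficients from the robust ACF via Yule-Walker."""
    rho = robust_acf(x, p)
    return solve_toeplitz(rho[:p], rho[1:p + 1])

rng = np.random.default_rng(6)
x = np.zeros(5000)
for t in range(1, 5000):                     # AR(1) with phi = 0.7
    x[t] = 0.7 * x[t - 1] + rng.normal()
x[rng.choice(5000, 50)] += 20.0              # inject gross outliers
print(robust_ar_fit(x, 1))                   # close to [0.7] despite outliers
```
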
  • A Direct Approach for the Frequency-Adaptive Feedforward Cancellation of Harmonic Disturbances

    Publication Year: 2010 , Page(s): 3523 - 3530
    Cited by:  Papers (7)
    PDF (574 KB) | HTML

    This paper is concerned with the robust rejection of harmonic disturbances with unknown frequency and amplitude affecting an uncertain linear system. The developed control scheme combines the properties of adaptive feedforward cancellation (AFC) techniques with the phase and frequency detection capabilities provided by a nonlinear frequency estimation algorithm. Under mild assumptions on the nominal model of the system to be controlled, the proposed scheme is proven to achieve complete rejection of harmonic disturbances at the input or at the output of the system. A detailed stability analysis, based on two-time-scale averaging, provides useful information on the effect of the tuning parameters on the convergence of the estimator and demonstrates the effectiveness of the proposed approach.

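    A stripped-down adaptive feedforward cancellation loop, assuming the disturbance frequency has already been estimated and taking the plant as unity (the paper additionally estimates the frequency online and handles plant uncertainty).

```python
import numpy as np

def afc_cancel(d, omega, mu=0.01):
    """LMS-style AFC of a single harmonic at (estimated) frequency
    omega: the in-phase/quadrature weights of the injected sinusoid
    adapt on the cancellation residual."""
    a = b = 0.0
    e = np.empty_like(d)
    for n, dn in enumerate(d):
        c, s = np.cos(omega * n), np.sin(omega * n)
        u = a * c + b * s                  # feedforward cancellation signal
        e[n] = dn - u                      # residual after cancellation
        a += mu * e[n] * c                 # gradient updates on the residual
        b += mu * e[n] * s
    return e

n = np.arange(20000)
d = 2.0 * np.cos(0.3 * n + 0.8) + 0.1 * np.random.default_rng(7).normal(size=n.size)
e = afc_cancel(d, 0.3)
print(np.var(d[-2000:]), np.var(e[-2000:]))  # residual power drops to the noise floor
```
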
  • Frame-Theoretic Analysis of Robust Filter Bank Frames to Quantization and Erasures

    Publication Year: 2010 , Page(s): 3531 - 3544
    Cited by:  Papers (3)
    PDF (464 KB) | HTML

    This paper presents a theoretical analysis of the robustness of filter bank (FB) frames in the infinite-dimensional Hilbert space ℓ²(ℤ) to quantization and erasures, and studies the design of such robust frames from the perspective of both frame and FB theory. First, a characterization of the eigenstructure of the frame operator and the induced Gram matrix of general FB frames is presented. The robustness of FB frames to erasures is then investigated in detail, with particular attention to necessary and sufficient conditions. Maximally robust frames are further analyzed via explicit constructive methods. Moreover, the stability, and even the possible FIR reconstruction, of the subframe by the pseudoinverse is studied thoroughly. Next, we examine optimal quantized FB frames. Introducing a novel notion called the frame energy, the universal optimality of tight FB frames to quantization is established, in contrast to the optimality of only equal-norm tight frames shown in previous works. The added design freedom obtained by removing the equal-norm constraint is explained and illustrated with examples. Finally, the effect of erasures on quantized FB frames is studied by incorporating the probability of erasures, which shows the universal optimality of uniform tight FB frames for one erasure. The optimal FB frame is usually not of equal norm; its norm distribution follows a reverse waterfilling principle. An example with a two-state erasure channel model further explains this result, followed by an analysis of successive reconstruction that shows its potential application to frame-based multiple description coding.

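    In finite dimensions the relevant frame quantities are easy to compute; the sketch below checks tightness and erasure robustness for a harmonic frame, a finite-dimensional stand-in for the paper's filter bank frames on ℓ²(ℤ).

```python
import numpy as np

def frame_bounds(F):
    """Frame bounds of a finite frame whose rows are the frame vectors:
    the extreme eigenvalues of the frame operator F^H F."""
    eig = np.linalg.eigvalsh(F.conj().T @ F)
    return eig.min(), eig.max()

# Harmonic frame: m = 6 vectors in C^4, a classic tight frame that
# remains a frame after up to m - d = 2 erasures.
m, d = 6, 4
F = np.exp(2j * np.pi * np.outer(np.arange(m), np.arange(d)) / m) / np.sqrt(m)
print(frame_bounds(F))                              # equal bounds: tight frame
print(frame_bounds(np.delete(F, [0, 3], axis=0)))   # still a frame after 2 erasures
```
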
  • Complex Gaussian Scale Mixtures of Complex Wavelet Coefficients

    Publication Year: 2010 , Page(s): 3545 - 3556
    Cited by:  Papers (4)
    PDF (882 KB) | HTML

    In this paper, we propose the complex Gaussian scale mixture (CGSM) to model complex wavelet coefficients, extending the Gaussian scale mixture (GSM), defined for real-valued random variables, to the complex case. Along with some related propositions and miscellaneous results, we present the probability density functions of the magnitude and phase of the complex random variable. Specifically, we give the closed form of the probability density function (pdf) of the magnitude for the case of the complex generalized Gaussian distribution, and the phase pdf for the general case. Subsequently, the pdf of the relative phase is derived. The CGSM is then applied to image denoising using the Bayes least-squares estimator in several complex transform domains. The experimental results show that using the CGSM of complex wavelet coefficients visually improves the quality of the denoised images over the real-valued case.

  • Discrete Inverse S Transform With Least Square Error in Time-Frequency Filters

    Publication Year: 2010 , Page(s): 3557 - 3568
    Cited by:  Papers (1)
    PDF (1739 KB) | HTML

    The S transform is useful in time-frequency analysis. Many inverse S transform algorithms have been proposed, with different filtering properties in the time-frequency spectrum. In this paper, the transformation matrices of the S transform and two novel least-squares inverse algorithms are proposed. The first minimizes the global mean square error over the entire time-frequency spectrum; the second considers only specific time-frequency regions of interest and is more flexible. The proposed inverse algorithms provide more stable and better performance than the existing ones.

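    A direct discrete S transform and the classical inverse (summing each voice over time recovers the Fourier spectrum) can be sketched as follows; the paper's least-squares inverses generalize this inverse to time-frequency-filtered spectra.

```python
import numpy as np

def stockwell(x):
    """Discrete S transform via its frequency-domain form: voice k is
    the inverse FFT of the spectrum shifted by k and weighted by a
    Gaussian whose width scales with k."""
    N = x.size
    X = np.fft.fft(x)
    S = np.zeros((N // 2 + 1, N), dtype=complex)
    S[0] = x.mean()                            # zero-frequency voice
    m = np.arange(N)
    m[m > N // 2] -= N                         # centered frequency offsets
    for k in range(1, N // 2 + 1):
        w = np.exp(-2.0 * np.pi ** 2 * m ** 2 / k ** 2)  # Gaussian voice window
        S[k] = np.fft.ifft(np.roll(X, -k) * w)
    return S

def inverse_stockwell(S, N):
    """Classical inverse: the time average of each voice equals the
    corresponding Fourier coefficient."""
    return np.fft.irfft(S.sum(axis=1), n=N)

x = np.random.default_rng(8).normal(size=256)
S = stockwell(x)
print(np.allclose(inverse_stockwell(S, x.size), x))  # True: perfect reconstruction
```
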
  • Kernel-Induced Sampling Theorem

    Publication Year: 2010 , Page(s): 3569 - 3577
    Cited by:  Papers (4)
    PDF (438 KB) | HTML

    A perfect reconstruction of functions in a reproducing kernel Hilbert space from a given set of sampling points is discussed. A necessary and sufficient condition for the corresponding reproducing kernel and the given set of sampling points to perfectly recover the functions is obtained in this paper. The key idea of our work is to adopt the reproducing kernel Hilbert space corresponding to the Gramian matrix of the kernel and the given set of sampling points as the range space of a sampling operator, and to consider the orthogonal projector, defined via the range space, onto the closed linear subspace spanned by the kernel functions corresponding to the given sampling points. We also give an error analysis of the reconstruction from incomplete sampling points.

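    The reconstruction mechanism can be sketched with a Gram-matrix interpolant, fhat(x) = k(x, X) G^{-1} f(X), the orthogonal projection onto the span of the kernel functions at the sampling points; the kernel and test function below are arbitrary illustrative choices.

```python
import numpy as np

def kernel_reconstruct(kernel, X_samp, f_samp):
    """Minimum-norm RKHS interpolant from samples f(X_samp). It
    recovers f exactly when the paper's condition on the kernel and
    sampling set holds; otherwise it is the orthogonal projection."""
    G = kernel(X_samp[:, None], X_samp[None, :])     # Gramian matrix
    coef = np.linalg.solve(G, f_samp)
    return lambda x: kernel(x[:, None], X_samp[None, :]) @ coef

kern = lambda a, b: np.exp(-np.abs(a - b))           # Laplacian kernel (a choice)
X = np.linspace(-3, 3, 25)
f = lambda x: np.exp(-x ** 2) * np.cos(2 * x)
fhat = kernel_reconstruct(kern, X, f(X))
xg = np.linspace(-2, 2, 101)
print(np.max(np.abs(fhat(xg) - f(xg))))              # small off-sample error
```
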
  • Sampling From a System-Theoretic Viewpoint: Part I—Concepts and Tools

    Publication Year: 2010 , Page(s): 3578 - 3590
    Cited by:  Papers (5)
    PDF (560 KB)

    This paper is the first in a series of papers studying a system-theoretic approach to the problem of reconstructing an analog signal from its samples. The idea, borrowed from earlier treatments in the control literature, is to address the problem as a hybrid model-matching problem in which performance is measured by system norms. In this paper we present the paradigm and review the underlying technical tools, such as the lifting technique and some topics from operator theory. This material facilitates a systematic and unified treatment of a wide range of sampling and reconstruction problems, recovering many hitherto separately derived solutions and leading to new results. Some of these applications are discussed in the second part.

  • Sampling From a System-Theoretic Viewpoint: Part II—Noncausal Solutions

    Publication Year: 2010 , Page(s): 3591 - 3606
    Cited by:  Papers (1)
    PDF (787 KB) | HTML

    This paper puts to use the concepts and tools introduced in Part I to address a wide spectrum of noncausal sampling and reconstruction problems. In particular, we follow the system-theoretic paradigm by using systems as signal generators to account for available information, and system norms (L² and L∞) as performance measures. The proposed optimization-based approach recovers many known solutions, derived hitherto by different methods, as special cases under different assumptions about acquisition or reconstruction devices (e.g., polynomial and exponential cardinal splines for fixed samplers, and the Sampling Theorem and its modifications when both sampler and interpolator are design parameters). We also derive new results, such as versions of the Sampling Theorem for downsampling and for reconstruction from noisy measurements, and the continuous-time invariance of a wide class of optimal sampling-and-reconstruction circuits.

  • Signal Recovery With Cost-Constrained Measurements

    Publication Year: 2010 , Page(s): 3607 - 3617
    Cited by:  Papers (6)
    PDF (583 KB) | HTML

    We are concerned with the problem of optimally measuring an accessible signal under a total cost constraint, in order to estimate a signal which is not directly accessible. An important aspect of our formulation is the inclusion of a measurement device model where each device has a cost depending on the number of amplitude levels that the device can reliably distinguish. We also assume that there is a cost budget so that it is not possible to make a high amplitude resolution measurement at every point. We investigate the optimal allocation of the cost budget to the measurement devices so as to minimize the estimation error. This problem differs from standard estimation problems in that we are allowed to “design” the number and noise levels of the measurement devices subject to the cost constraint. Our main results are presented in the form of tradeoff curves between the estimation error and the cost budget. Although our primary motivation and numerical examples come from wave propagation problems, our formulation is also valid for other measurement problems with similar budget limitations where the observed variables are related to the unknown variables through a linear relation. We discuss the effects of signal-to-noise ratio, distance of propagation, and the degree of coherence (correlation) of the waves on these tradeoffs and on the optimum cost allocation. Our conclusions not only yield practical strategies for designing optimal measurement systems under cost constraints, but also provide insights into measurement aspects of certain inverse problems.

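    As an illustration of the budget-MSE tradeoff, here is a greedy bit-allocation sketch under an assumed noise model (measurement noise standard deviation inversely proportional to the number of resolvable amplitude levels); the paper's device and cost models are more detailed.

```python
import numpy as np

def lmmse_mse(H, noise_var, prior_var=1.0):
    """Trace of the LMMSE error covariance for y = Hx + n,
    x ~ N(0, prior_var*I), n ~ N(0, diag(noise_var))."""
    p = H.shape[1]
    P = np.linalg.inv(np.eye(p) / prior_var + H.T @ (H / noise_var[:, None]))
    return np.trace(P)

def greedy_allocation(H, budget_bits, dyn_range=4.0):
    """Greedily assign amplitude-resolution bits to devices. A device
    with b bits resolves 2^b levels; its noise std is modeled (an
    assumption of this sketch) as dyn_range / 2^b."""
    m = H.shape[0]
    bits = np.zeros(m, dtype=int)
    for _ in range(budget_bits):
        costs = []
        for i in range(m):
            trial = bits.copy(); trial[i] += 1
            costs.append(lmmse_mse(H, (dyn_range / 2.0 ** trial) ** 2))
        bits[int(np.argmin(costs))] += 1
    return bits, lmmse_mse(H, (dyn_range / 2.0 ** bits) ** 2)

rng = np.random.default_rng(9)
H = rng.normal(size=(6, 3))
for budget in (6, 12, 18):
    bits, mse = greedy_allocation(H, budget)
    print(budget, bits, round(mse, 4))    # tradeoff curve: MSE falls with budget
```
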
  • Uniform Discrete Curvelet Transform

    Publication Year: 2010 , Page(s): 3618 - 3634
    Cited by:  Papers (14)
    PDF (1731 KB) | HTML

    An implementation of the discrete curvelet transform is proposed in this work. The transform is based on, and has the same order of complexity as, the fast Fourier transform (FFT). The discrete curvelet functions are defined by a parameterized family of smooth windowed functions that satisfy two conditions: i) they are periodic; ii) their squares form a partition of unity. The transform is named the uniform discrete curvelet transform (UDCT) because the centers of the curvelet functions at each resolution are positioned on a uniform lattice. The forward and inverse transforms form a tight and self-dual frame, in the sense that they are the exact transpose of each other. A generalization to the M-dimensional UDCT is also presented. The novel discrete transform has several advantages over existing transforms, such as a lower redundancy ratio, a hierarchical data structure, and ease of implementation.

  • A Dynamical Games Approach to Transmission-Rate Adaptation in Multimedia WLAN

    Publication Year: 2010 , Page(s): 3635 - 3646
    Cited by:  Papers (7)
    PDF (1028 KB) | HTML

    This paper considers scheduling, rate adaptation, and buffer management in a multiuser wireless local-area network (WLAN) where each user transmits a scalable video payload. Based on opportunistic scheduling, users access the available medium (channel) in a decentralized manner. The rate adaptation problem of the WLAN multimedia network is then formulated as a general-sum switching-control dynamic Markovian game by modeling the video states and block-fading channel qualities of each user as a finite-state Markov chain. A value iteration algorithm is proposed to compute the Nash equilibrium policy of such a game, and the convergence of the algorithm is proved. We also give assumptions on the system under which the Nash equilibrium transmission policy of each user is a randomization of two pure policies, each nondecreasing in the buffer state occupancy. Based on this structural result, we use a policy gradient algorithm to compute the Nash equilibrium policy.

  • Detection–Estimation of Very Close Emitters: Performance Breakdown, Ambiguity, and General Statistical Analysis of Maximum-Likelihood Estimation

    Publication Year: 2010 , Page(s): 3647 - 3660
    Cited by:  Papers (4)
    PDF (818 KB) | HTML

    We reexamine the well-known problem of “threshold behavior” or “performance breakdown” in the detection-estimation of very closely spaced emitters. In this extreme regime, we analyze the performance of maximum-likelihood estimation (MLE) of the directions-of-arrival (DOA) of two close Gaussian sources over the range of sample volumes and signal-to-noise ratios (SNRs) where the correct number of sources is reliably estimated by information-theoretic criteria (ITC), but where one of the DOA estimates is severely erroneous (an “outlier”). We show that random matrix theory (RMT) applied to the evaluation of theoretical MLE performance gives a relatively simple and accurate analytical description of the threshold behavior of MLE and ITC. In particular, the introduced “single-cluster” criterion provides accurate “ambiguity bounds” for the outliers.

  • Noncoherent MIMO Radar for Location and Velocity Estimation: More Antennas Means Better Performance

    Publication Year: 2010 , Page(s): 3661 - 3680
    Cited by:  Papers (25)
    PDF (2542 KB) | HTML

    This paper presents an analysis of the joint estimation of target location and velocity using a multiple-input multiple-output (MIMO) radar employing noncoherent processing for a complex Gaussian extended target. A MIMO radar with M transmit and N receive antennas is considered. To provide insight, we focus on a simplified case first, assuming orthogonal waveforms, temporally and spatially white noise-plus-clutter, and independent reflection coefficients. Under these simplifying assumptions, the maximum-likelihood (ML) estimate is analyzed, and a theorem demonstrating the asymptotic consistency (for large MN) of the ML estimate is provided. Numerical investigations, given later, indicate similar behavior for some reasonable cases violating the simplifying assumptions. In these initial investigations, we study unconstrained systems, in terms of complexity and energy, where each added transmit antenna employs a fixed energy so that the total transmitted energy is allowed to increase as we increase the number of transmit antennas. Following this, we also look at constrained systems, where the total system energy and complexity are fixed. To approximate systems of fixed complexity in an abstract way, we restrict the total number of antennas employed to be fixed. Here, we show numerical examples which indicate a preference for receive antennas, similar to MIMO communications, but where systems with multiple transmit antennas yield the smallest possible mean-square error (MSE). The joint Cramér-Rao bound (CRB) is calculated and the MSE of the ML estimate is analyzed. It is shown for some specific numerical examples that the signal-to-clutter-plus-noise ratio (SCNR) threshold, indicating the SCNRs above which the MSE of the ML estimate is reasonably close to the CRB, can be lowered by increasing MN. The noncoherent MIMO radar ambiguity function (AF) is developed in two different ways and illustrated by examples. It is shown for some specific examples that the size of the product MN controls the levels of the sidelobes of the AF.

  • Iterative Adaptive Kronecker MIMO Radar Beamformer: Description and Convergence Analysis

    Publication Year: 2010 , Page(s): 3681 - 3691
    Cited by:  Papers (5)
    PDF (972 KB) | HTML

    We introduce an iterative procedure for the design of adaptive KL-variate linear beamformers that are structured as the Kronecker product of K-variate (transmit) and L-variate (receive) beamformers. We focus on MIMO radar applications in scenarios where only joint transmit and receive adaptive beamforming can efficiently mitigate multi-mode propagated backscatter interference, because the direction-of-departure (DoD) of one interference mode and the direction-of-arrival (DoA) of the other coincide with those of the target. We introduce a Markov model for the adaptive iterative routine, specify its convergence condition, and derive the final (stable) signal-to-interference-plus-noise ratio (SINR) performance characteristics. Simulation results demonstrate the high accuracy of the analytical derivations. In addition, we demonstrate that, for the considered class of multiple-input multiple-output (MIMO) radar interference scenarios, the diagonally loaded sample matrix inversion (SMI) algorithm provides additional performance and convergence-rate improvement for this iterative adaptive Kronecker beamformer.

  • Autoregressive Modeling of Temporal/Spectral Envelopes With Finite-Length Discrete Trigonometric Transforms

    Publication Year: 2010 , Page(s): 3692 - 3705
    Cited by:  Papers (2)
    PDF (984 KB) | HTML

    The theory of autoregressive (AR) modeling, also known as linear prediction, has been established through the Fourier analysis of infinite discrete-time sequences or continuous-time signals. For various finite-length discrete trigonometric transforms (DTTs), however, including the discrete cosine and sine transforms of different types, the theory is not well established. Several DTTs are used in current audio coding, and the AR modeling method can be applied there to reduce coding artifacts or exploit data redundancies. This paper systematically develops the AR modeling fundamentals of temporal and spectral envelopes for the sixteen members of the DTT family. The paper first considers AR modeling in the generalized discrete Fourier transforms (GDFTs) and then extends the modeling to all the DTTs by introducing analytic transforms that convert real-valued vectors into complex-valued ones. Through this process, we build compact matrix representations for the AR modeling of the DTTs in both the time domain and the DTT domain. These compact forms also show that AR modeling of the envelopes can be performed through the Hilbert envelope and the power envelope, and they can be used to develop new coding technologies or to examine possible defects in existing AR modeling methods for DTTs. We apply the forms to analyze the temporal noise shaping (TNS) tool in MPEG-2/4 advanced audio coding (AAC).

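    The TNS connection can be illustrated directly: linear prediction applied across the DCT coefficients of a transient frame yields prediction gain, since across-frequency correlation encodes the temporal envelope (a generic sketch, not the AAC TNS algorithm).

```python
import numpy as np
from scipy.fft import dct
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

def tns_analysis(spec, order=4):
    """Linear prediction across spectral coefficients: the prediction
    error filter flattens the across-frequency correlation, which for
    a transient frame corresponds to shaping quantization noise along
    its temporal envelope."""
    r = np.correlate(spec, spec, mode="full")[spec.size - 1:]
    a = solve_toeplitz(r[:order], r[1:order + 1])    # Yule-Walker / LPC
    residual = lfilter(np.r_[1.0, -a], [1.0], spec)  # prediction error
    return a, residual

# A frame with a sharp attack: its DCT coefficients are correlated
# across frequency in a way that encodes the temporal envelope.
rng = np.random.default_rng(10)
frame = rng.normal(size=1024) * np.exp(-np.arange(1024) / 60.0)
spec = dct(frame, norm="ortho")
a, res = tns_analysis(spec)
print(np.var(res) / np.var(spec))    # prediction gain: ratio well below 1
```
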

Aims & Scope

IEEE Transactions on Signal Processing covers novel theory, algorithms, performance analyses, and applications of techniques for the processing, understanding, learning, retrieval, mining, and extraction of information from signals.

Meet Our Editors

Editor-in-Chief
Sergios Theodoridis
University of Athens