
IEEE Transactions on Signal Processing

Issue 3, Part 1 • March 2010

This issue contains several parts.


Displaying Results 1 - 25 of 51
  • Table of contents

    Publication Year: 2010 , Page(s): C1 - C4
    PDF (133 KB) | Freely Available from IEEE
  • IEEE Transactions on Signal Processing publication information

    Publication Year: 2010 , Page(s): C2
    PDF (39 KB) | Freely Available from IEEE
  • Statistical Detection of Congestion in Routers

    Publication Year: 2010 , Page(s): 957 - 968
    Cited by:  Papers (6)
    PDF (1857 KB) | HTML

    Detection of congestion plays a key role in numerous networking protocols, including those driving active queue management (AQM) methods used in congestion control in Internet routers. This paper draws on statistical detection theory to develop simple detection mechanisms that can further enhance current AQM methods. The detection of congestion is performed using a maximum-likelihood ratio test (MLRT), which reveals that the likelihood of congestion grows exponentially with the queue occupancy level. Performance evaluation of the likelihood detector shows it is robust to variations of the network parameters. The mathematical expression of the likelihood of congestion depends on the router's current dropping rate, its desired queue occupancy level, and the current queue occupancy. When incorporated into random early marking (REM) and random early detection (RED), the likelihood-ratio-based detection considerably improves their reaction time and reduces the variance of queue occupancy values.
  • Frequency-Domain Correlation: An Asymptotically Optimum Approximation of Quadratic Likelihood Ratio Detectors

    Publication Year: 2010 , Page(s): 969 - 979
    Cited by:  Papers (14)
    PDF (650 KB) | HTML

    An approximate implementation is formulated and analyzed for the detection of wide-sense stationary Gaussian stochastic signals in white Gaussian noise. For scalar processes, the approximate detector can be realized as the correlation between the periodogram of the observations and an appropriately selected spectral mask, and thus is termed the frequency-domain correlation detector. Through the asymptotic properties of Toeplitz matrices, it is shown that, as the length of the observation interval grows without bound, the frequency-domain correlation detector and the optimum quadratic detector achieve identical asymptotic performance, characterized by the decay rate of the miss probability under the Neyman-Pearson criterion. The frequency-domain correlation detector is further extended to the detection of vector-valued wide-sense stationary Gaussian stochastic signals, and the asymptotic optimality of its performance is established through the asymptotic properties of block Hermitian Toeplitz matrices.
  • A Statistical Analysis of Morse Wavelet Coherence

    Publication Year: 2010 , Page(s): 980 - 989
    Cited by:  Papers (4)
    PDF (452 KB) | HTML

    Wavelet coherence computed from two time series has been widely applied in hypothesis testing situations, but has proven resistant to analytic study, with statistical properties instead obtained by simulation. As part of the null hypothesis being tested, such simulations invariably assume joint stationarity of the series. If estimated using multiple orthogonal Morse wavelets, wavelet coherence is in fact amenable to statistical study. Since the wavelets are complex-valued, we consider the case of wavelet coherence calculated from discrete-time complex-valued and stationary time series. Under Gaussianity, the Goodman distribution is, for large samples, appropriate for wavelet coherence. The true wavelet coherence value is identified in terms of its frequency domain equivalent. Theoretical results are illustrated and verified via simulations.
  • Blind MIMO-AR System Identification and Source Separation With Finite-Alphabet

    Publication Year: 2010 , Page(s): 990 - 1000
    Cited by:  Papers (3)
    PDF (1160 KB) | HTML

    In this paper, a new method for system identification and blind source separation in a multiple-input multiple-output (MIMO) system is proposed. The MIMO channel is modeled by a multi-dimensional autoregressive (AR) system. The transmitted signals are assumed to take values from a finite alphabet, modeled by the Gaussian mixture model (GMM) with infinitesimal variances. The expectation-maximization (EM) algorithm for estimation of the MIMO-AR model parameters is derived. The performance of the proposed algorithm in terms of probability of error in signal detection and root mean squared error (RMSE) of the system parameters and system transfer function estimates is evaluated via simulations. It is shown that the obtained probability of error is very close to the probability of error of the optimal algorithm which assumes known channel state information.
  • Robust Estimation of a Random Parameter in a Gaussian Linear Model With Joint Eigenvalue and Elementwise Covariance Uncertainties

    Publication Year: 2010 , Page(s): 1001 - 1011
    Cited by:  Papers (6)
    PDF (740 KB) | HTML

    We consider the estimation of a Gaussian random vector x observed through a linear transformation H and corrupted by additive Gaussian noise with a known covariance matrix, where the covariance matrix of x is known to lie in a given region of uncertainty that is described using bounds on the eigenvalues and on the elements of the covariance matrix. Recently, two criteria for minimax estimation called difference regret (DR) and ratio regret (RR) were proposed and their closed form solutions were presented assuming that the eigenvalues of the covariance matrix of x are known to lie in a given region of uncertainty, and assuming that the matrices H^T C_w^-1 H and C_x are jointly diagonalizable, where C_w and C_x denote the covariance matrices of the additive noise and of x, respectively. In this work we present a new criterion for the minimax estimation problem which we call the generalized difference regret (GDR), and derive a new minimax estimator which is based on the GDR criterion where the region of uncertainty is defined not only using upper and lower bounds on the eigenvalues of the parameter's covariance matrix, but also using upper and lower bounds on the individual elements of the covariance matrix itself. Furthermore, the new estimator does not require the assumption of joint diagonalizability and it can be obtained efficiently using semidefinite programming. We also show that when the joint diagonalizability assumption holds and when there are only eigenvalue uncertainties, then the new estimator is identical to the difference regret estimator. The experimental results show that we can obtain improved mean squared error (MSE) results compared to the MMSE, DR, and RR estimators.
  • Gaussian Multiresolution Models: Exploiting Sparse Markov and Covariance Structure

    Publication Year: 2010 , Page(s): 1012 - 1024
    Cited by:  Papers (3)
    PDF (1094 KB) | HTML

    In this paper, we consider the problem of learning Gaussian multiresolution (MR) models in which data are only available at the finest scale, and the coarser, hidden variables serve to capture long-distance dependencies. Tree-structured MR models have limited modeling capabilities, as variables at one scale are forced to be uncorrelated with each other conditioned on other scales. We propose a new class of Gaussian MR models in which variables at each scale have sparse conditional covariance structure conditioned on other scales. Our goal is to learn a tree-structured graphical model connecting variables across scales (which translates into sparsity in inverse covariance), while at the same time learning sparse structure for the conditional covariance (not its inverse) within each scale conditioned on other scales. This model leads to an efficient, new inference algorithm that is similar to multipole methods in computational physics. We demonstrate the modeling and inference advantages of our approach over methods that use MR tree models and single-scale approximation methods that do not use hidden variables.
  • A Recursive Method for the Approximation of LTI Systems Using Subband Processing

    Publication Year: 2010 , Page(s): 1025 - 1034
    Cited by:  Papers (2)
    PDF (642 KB) | HTML

    Using the subband technique, an LTI system can be implemented by the composition of an analysis filterbank, followed by a transfer matrix (subband model) and a synthesis filterbank. The advantage of this approach is that it offers a good tradeoff between latency and computational complexity. In this paper we propose an optimization method for approximating an LTI system using the subband technique. The proposed method includes optimal allocation of parameters from different FIR entries of the subband model, while keeping constant the total number of parameters, for a better utilization of the available coefficients. The optimization is done in a weighted least-squares sense considering either linear or logarithmic amplitude scale. Simulation results demonstrate the advantages of the proposed method when compared with classical implementation approaches using pole-zero transfer functions or segmented FFT algorithms.
  • Diffusion LMS Strategies for Distributed Estimation

    Publication Year: 2010 , Page(s): 1035 - 1048
    Cited by:  Papers (181)
    PDF (1459 KB) | HTML

    We consider the problem of distributed estimation, where a set of nodes is required to collectively estimate some parameter of interest from noisy measurements. The problem is useful in several contexts including wireless and sensor networks, where scalability, robustness, and low power consumption are desirable features. Diffusion cooperation schemes have been shown to provide good performance, robustness to node and link failure, and are amenable to distributed implementations. In this work we focus on diffusion-based adaptive solutions of the LMS type. We motivate and propose new versions of the diffusion LMS algorithm that outperform previous solutions. We provide performance and convergence analysis of the proposed algorithms, together with simulation results comparing with existing techniques. We also discuss optimization schemes to design the diffusion LMS weights.
  • Adaptive Filter Algorithms for Accelerated Discrete-Time Consensus

    Publication Year: 2010 , Page(s): 1049 - 1058
    Cited by:  Papers (4)
    PDF (718 KB) | HTML

    In many distributed systems, the objective is to reach agreement on values acquired by the nodes in a network. A common approach to solve such problems is the iterative, weighted linear combination of those values to which each node has access. Methods to compute appropriate weights have been extensively studied, but the resulting iterative algorithms still require many iterations to provide a fairly good estimate of the consensus value. In this study we show that a good estimate of the consensus value can be obtained with few iterations of conventional consensus algorithms by filtering the output of each node with set-theoretic adaptive filters. We use the adaptive projected subgradient method to derive a set-theoretic filter requiring only local information available to each node and being robust to topology changes and erroneous information about the network. Numerical simulations show the good performance of the proposed method.
  • Empirical Mode Decomposition for Trivariate Signals

    Publication Year: 2010 , Page(s): 1059 - 1068
    Cited by:  Papers (34)
    PDF (1367 KB) | HTML

    An extension of empirical mode decomposition (EMD) is proposed in order to make it suitable for operation on trivariate signals. Estimation of the local mean envelope of the input signal, a critical step in EMD, is performed by taking projections along multiple directions in three-dimensional space using the rotation property of quaternions. The proposed algorithm thus extracts rotating components embedded within the signal and performs accurate time-frequency analysis, via the Hilbert-Huang transform. Simulations on synthetic trivariate point processes and real-world three-dimensional signals support the analysis.
  • Subspace-Based Rational Interpolation of Analytic Functions From Phase Data

    Publication Year: 2010 , Page(s): 1069 - 1081
    Cited by:  Papers (5)
    PDF (372 KB) | HTML

    In this paper, two simple subspace-based identification algorithms to identify stable linear time-invariant systems from corrupted phase samples of the frequency response function are developed. The first algorithm uses data sampled at nonuniformly spaced frequencies and is strongly consistent if corruptions are zero-mean additive random variables with a known covariance function. However, this algorithm is biased when corruptions are multiplicative, yet it exactly retrieves finite-dimensional systems from noise-free phase data using a finite amount of data. The second algorithm uses phase data sampled at equidistantly spaced frequencies and also has the same interpolation and strong consistency properties if corruptions are zero-mean additive random variables. The latter property holds also for the multiplicative noise model provided that some noise statistics are known a priori. Promising results are obtained when the algorithms are applied to simulated data.
  • An Impossibility Result for Linear Signal Processing Under Thresholding

    Publication Year: 2010 , Page(s): 1082 - 1094
    Cited by:  Papers (2)
    PDF (452 KB) | HTML

    In this paper, we analyze the approximation of the outputs of linear time-invariant systems by sampling series that use only the samples of the input signal. The samples are disturbed by the threshold operator, which sets all samples with an absolute value smaller than some threshold to zero. We do the analysis for the space of Paley-Wiener signals with absolutely integrable Fourier transform and show for the Hilbert transform that the peak approximation error can grow arbitrarily large for some signals in this space when the threshold approaches zero. This behavior is counterintuitive because one would expect a better behavior if the threshold were decreased. Since we consider oversampling and all kernels from a certain meaningful set, the results are valid not only for one specific approximation process, but for a whole class of approximation processes. Furthermore, we give a game theoretic interpretation of the problem in the setting of a game against nature and show that nature has a universal strategy to win this game.
  • Distributed Sampling of Signals Linked by Sparse Filtering: Theory and Applications

    Publication Year: 2010 , Page(s): 1095 - 1109
    Cited by:  Papers (8)  |  Patents (4)
    PDF (1073 KB) | HTML

    We study the distributed sampling and centralized reconstruction of two correlated signals, modeled as the input and output of an unknown sparse filtering operation. This is akin to a Slepian-Wolf setup, but in the sampling rather than the lossless compression case. Two different scenarios are considered: In the case of universal reconstruction, we look for a sensing and recovery mechanism that works for all possible signals, whereas in what we call almost sure reconstruction, we allow a small set (with measure zero) of unrecoverable signals. We derive achievability bounds on the number of samples needed for both scenarios. Our results show that only in the almost sure setup can we exploit the signal correlations to achieve effective gains in sampling efficiency. In addition to the above theoretical analysis, we propose an efficient and robust distributed sampling and reconstruction algorithm based on annihilating filters. We evaluate the performance of our method in one synthetic scenario, and two practical applications, including distributed audio sampling in binaural hearing aids and the efficient estimation of room impulse responses. The numerical results confirm the effectiveness and robustness of the proposed algorithm in both synthetic and practical setups.
  • Fast Computation of Frequency Warping Transforms

    Publication Year: 2010 , Page(s): 1110 - 1121
    Cited by:  Papers (10)
    PDF (605 KB) | HTML

    In this paper, we introduce an analytical approach for the frequency warping transform. Criteria for the design of operators based on arbitrary warping maps are provided and an algorithm carrying out a fast computation is defined. Such operators can be used to shape the tiling of the time-frequency (TF) plane in a flexible way. Moreover, they are designed to be inverted by the application of their adjoint operator. According to the proposed model, the frequency warping transform is computed by considering two additive operators: the first one represents its nonuniform Fourier transform approximation and the second one suppresses aliasing. The first operator is fast computable by various interpolation approaches. A factorization of the second operator is found for arbitrary shaped nonsmooth warping maps. By properly truncating the operators involved in the factorization, the computation turns out to be fast without compromising accuracy.
  • Improved Twiddle Access for Fast Fourier Transforms

    Publication Year: 2010 , Page(s): 1122 - 1130
    Cited by:  Papers (3)
    PDF (679 KB) | HTML

    Optimizing the number of arithmetic operations required in fast Fourier transform (FFT) algorithms has been the focus of extensive research, but memory management is of comparable importance on modern processors. In this article, we investigate two known FFT algorithms, G and GT, which are similar to Cooley-Tukey decimation-in-time and decimation-in-frequency FFT algorithms but give an asymptotic reduction in the number of twiddle factor loads required for depth-first recursions. The algorithms also allow for aggressive vectorization (even for non-power-of-2 orders) and easier optimization of trivial twiddle factor multiplies. We benchmark G and GT implementations with comparable Cooley-Tukey implementations on commodity hardware. In a comparison designed to isolate the effect of twiddle factor access optimization, these benchmarks show typical speedups ranging from 10% to 65%, depending on transform order, precision, and vectorization. A more heavily optimized implementation of GT yields substantial performance improvements over the widely used code FFTW for many transform orders. The twiddle factor access optimization technique can be generalized to other common FFT algorithms, including real-data FFTs, split-radix FFTs, and multidimensional FFTs.
  • A Subband Adaptive Iterative Shrinkage/Thresholding Algorithm

    Publication Year: 2010 , Page(s): 1131 - 1143
    Cited by:  Papers (7)
    PDF (1102 KB) | HTML

    We investigate a subband adaptive version of the popular iterative shrinkage/thresholding algorithm that takes different update steps and thresholds for each subband. In particular, we provide a condition that ensures convergence and discuss why making the algorithm subband adaptive accelerates the convergence. We also give an algorithm to select appropriate update steps and thresholds for the case in which the distortion operator is linear and time-invariant. The results in this paper may be regarded as extensions of the recent work by Vonesch and Unser.
  • Multirate Synchronous Sampling of Sparse Multiband Signals

    Publication Year: 2010 , Page(s): 1144 - 1156
    Cited by:  Papers (6)
    PDF (766 KB) | HTML

    Recent advances in electro-optical systems make them ideal for undersampling multiband signals with very high carrier frequencies. In this paper, we propose a new scheme for the sampling and reconstruction of multiband sparse signals that occupy a small part of a given broad frequency range under the constraint of a small number of sampling channels. The locations of the signal bands are not known a priori. The scheme, which we call synchronous multirate sampling (SMRS), entails gathering samples synchronously at a few different rates whose sum is significantly lower than the Nyquist sampling rate. The signals are reconstructed by finding a solution of an underdetermined system of linear equations by applying a pursuit algorithm and assuming that the solution is composed of a minimum number of bands. The empirical reconstruction success rate is higher than that obtained using a previously published multicoset scheme when the number of sampling channels is small and the conditions for a perfect reconstruction in the multicoset scheme are not fulfilled. The practical sampling system simulated in our work consists of three sampling channels. Our simulation results show that a very high empirical success rate is obtained when the total sampling rate is five times higher than the total signal support of a complex signal with four bands. By comparison, a multicoset sampling scheme obtains a very high empirical success rate with a total sampling rate which is three times higher than the total signal support. However, the multicoset scheme requires 14 channels.
  • Convolution on the n-Sphere With Application to PDF Modeling

    Publication Year: 2010 , Page(s): 1157 - 1170
    Cited by:  Papers (3)
    PDF (1875 KB) | HTML

    In this paper, we derive an explicit form of the convolution theorem for functions on an n-sphere. Our motivation comes from the design of a probability density estimator for n-dimensional random vectors. We propose a probability density function (pdf) estimation method that uses the derived convolution result on S^n. Random samples are mapped onto the n-sphere and estimation is performed in the new domain by convolving the samples with the smoothing kernel density. The convolution is carried out in the spectral domain. Samples are mapped between the n-sphere and the n-dimensional Euclidean space by the generalized stereographic projection. We apply the proposed model to several synthetic and real-world data sets and discuss the results.
  • Singular Value Decompositions and Low Rank Approximations of Tensors

    Publication Year: 2010 , Page(s): 1171 - 1182
    Cited by:  Papers (9)
    PDF (744 KB) | HTML

    The singular value decomposition is among the most important tools in numerical analysis for solving a wide scope of approximation problems in signal processing, model reduction, system identification and data compression. Nevertheless, there is no straightforward generalization of the algebraic concepts underlying the classical singular values and singular value decompositions to multilinear functions. Motivated by the problem of lower rank approximations of tensors, this paper develops a notion of singular values for arbitrary multilinear mappings. We provide bounds on the error between a tensor and its optimal lower rank approximation. Conceptual algorithms are proposed to compute singular value decompositions of tensors.
  • Joint Nonlinear Channel Equalization and Soft LDPC Decoding With Gaussian Processes

    Publication Year: 2010 , Page(s): 1183 - 1192
    Cited by:  Papers (5)
    PDF (1319 KB) | HTML

    In this paper, we introduce a new approach for nonlinear equalization based on Gaussian processes for classification (GPC). We propose to measure the performance of this equalizer after a low-density parity-check channel decoder has detected the received sequence. Typically, most channel equalizers concentrate on reducing the bit error rate, instead of providing accurate posterior probability estimates. We show that the accuracy of these estimates is essential for optimal performance of the channel decoder and that the error rate output by the equalizer might be irrelevant to understand the performance of the overall communication receiver. In this sense, GPC is a Bayesian nonlinear classification tool that provides accurate posterior probability estimates with short training sequences. In the experimental section, we compare the proposed GPC-based equalizer with state-of-the-art solutions to illustrate its improved performance.
  • Resolution Enhancement in ΣΔ Learners for Superresolution Source Separation

    Publication Year: 2010 , Page(s): 1193 - 1204
    Cited by:  Papers (2)
    PDF (1316 KB) | HTML

    Many source separation algorithms fail to deliver robust performance when applied to signals recorded using high-density sensor arrays where the distance between sensor elements is much less than the wavelength of the signals. This can be attributed to limited dynamic range (determined by analog-to-digital conversion) of the sensor which is insufficient to overcome the artifacts due to large cross-channel redundancy, nonhomogeneous mixing, and high-dimensionality of the signal space. This paper proposes a novel framework that overcomes these limitations by integrating statistical learning directly with the signal measurement (analog-to-digital) process which enables high fidelity separation of linear instantaneous mixtures. At the core of the proposed approach is a min-max optimization of a regularized objective function that yields a sequence of quantized parameters which asymptotically tracks the statistics of the input signal. Experiments with synthetic and real recordings demonstrate significant and consistent performance improvements when the proposed approach is used as the analog-to-digital front-end to conventional source separation algorithms.
  • Evolution of Resource Reciprocation Strategies in P2P Networks

    Publication Year: 2010 , Page(s): 1205 - 1218
    Cited by:  Papers (6)
    PDF (1965 KB) | HTML

    In this paper, we consider the resource reciprocation among self-interested peers in peer-to-peer (P2P) networks, which is modeled as a stochastic game. Peers play the game by determining their optimal strategies for resource distributions using a Markov decision process (MDP) framework. The optimal strategies enable the peers to maximize their long-term utility. Unlike in conventional MDP frameworks, we consider heterogeneous peers that have different and limited ability to characterize their resource reciprocation with other peers. This is due to the large complexity requirements associated with their decision making processes. We analytically investigate these tradeoffs and show how to determine the optimal number of state descriptions, which maximizes each peer's average cumulative download rates given a limited time for computing the optimal strategies. We also investigate how the resource reciprocation evolves over time as peers adapt their reciprocation strategies by changing the number of state descriptions. Then, we study how resulting download rates affect their performance as well as that of the other peers with which they interact. Our simulation results quantify the tradeoffs between the number of state descriptions and the resulting utility. We also show that evolving resource reciprocation can improve the performance of peers which are simultaneously refining their state descriptions.
  • On-Line Prediction of Nonstationary Variable-Bit-Rate Video Traffic

    Publication Year: 2010 , Page(s): 1219 - 1237
    Cited by:  Papers (5)
    PDF (1644 KB) | HTML

    In this paper, we propose a model-based bandwidth prediction scheme for variable-bit-rate (VBR) video traffic with a regular group of pictures (GOP) pattern. A multiplicative ARIMA (autoregressive integrated moving-average) process called GOP ARIMA (ARIMA for GOP) is used as the base stochastic model, which consists of two key ingredients: prediction and model validity check. For traffic prediction, we deploy a Kalman filter over the GOP ARIMA model, and confidence interval analysis for validity determination. The GOP ARIMA model explicitly models inter- and intra-GOP frame size correlations and the Kalman filter-based prediction maintains "state" across the prediction rounds. Synergy of the two successfully addresses a number of challenging issues, such as a unified framework for frame type dependent prediction, accurate prediction, and robustness against noise. With few exceptions, a single video session consists of several scenes whose bandwidth process may exhibit different stochastic nature, which hinders recursive adjustment of parameters in the Kalman filter, because its stochastic model structure is fixed at its deployment. To effectively address this issue, the proposed prediction scheme harbors a statistical hypothesis test in the prediction framework. By formulating the confidence interval of a prediction in terms of Kalman filter components, it not only predicts the frame size but also determines validity of the stochastic model. Based upon the results of the model validity check, the proposed prediction scheme updates the structures of the underlying GOP ARIMA model. We perform a comprehensive performance study using publicly available MPEG-2 and MPEG-4 traces. We compare the prediction accuracy of four different prediction schemes. In all traces, the proposed model yields superior prediction accuracy to the other prediction schemes. We show that confidence interval analysis effectively detects the structural changes in the sample sequence and that properly updating the model results in more accurate prediction. However, model update requires a certain length of observation period, e.g., 60 frames (2 s). Due to this learning overhead, the advantage of model update becomes less significant when scene length is short. Through queueing simulation, we examine the effect of prediction accuracy on user perceivable QoS. The proposed bandwidth prediction scheme allocates less than 50% of the queue (buffer) space compared to the other bandwidth prediction schemes, but still yields better packet loss behavior.
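The congestion-detection abstract above ("Statistical Detection of Congestion in Routers") describes a maximum-likelihood ratio test in which the likelihood of congestion grows exponentially with queue occupancy. The sketch below is a minimal stand-in, not the paper's statistic: it compares two hypothetical geometric occupancy models (parameters p0 and p1 are invented for illustration), which makes the log-likelihood ratio linear, and hence the likelihood ratio exponential, in the occupancy level.

```python
import math

def log_likelihood_ratio(q, p0, p1):
    """Log-likelihood ratio of queue occupancy q under two hypothetical
    geometric models P_i(q) = (1 - p_i) * p_i**q; p1 > p0 models the
    heavier queue buildup expected under congestion."""
    return (math.log(1 - p1) + q * math.log(p1)) \
         - (math.log(1 - p0) + q * math.log(p0))

def congested(q, p0=0.5, p1=0.9, threshold=0.0):
    # Declare congestion when the congested model is the more likely one.
    return log_likelihood_ratio(q, p0, p1) > threshold
```

An empty queue is classified as uncongested while a long queue trips the detector; an AQM scheme such as RED could consume such a decision in place of a fixed occupancy threshold.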
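The frequency-domain correlation detector described above reduces, for scalar processes, to correlating the observation periodogram with a spectral mask. A minimal numerical sketch, with an invented low-pass mask and test signal standing in for the paper's optimally selected ones:

```python
import numpy as np

def fd_correlation_statistic(x, mask):
    """Correlate the periodogram of x with a spectral mask; the result is
    compared against a threshold to decide signal presence."""
    n = len(x)
    periodogram = np.abs(np.fft.fft(x)) ** 2 / n
    return float(periodogram @ mask)

rng = np.random.default_rng(0)
N = 256
mask = np.zeros(N)                 # hypothetical low-pass mask
mask[:N // 8] = 1.0
mask[-N // 8:] = 1.0               # mirrored bins for a real-valued signal

noise_only = rng.standard_normal(N)
# Low-pass stochastic "signal": moving-average filtered white noise.
lowpass = np.convolve(rng.standard_normal(N + 8), np.ones(8) / 8,
                      mode="valid")[:N]
with_signal = noise_only + 3.0 * lowpass
```

The statistic comes out larger when the low-pass signal is present; the paper's result is that a properly chosen mask makes this simple test asymptotically match the optimum quadratic detector.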
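For the diffusion LMS paper, the adapt-then-combine structure can be sketched as below. The ring topology, uniform combination weights, step size, and noise level are illustrative choices, not the paper's optimized designs.

```python
import numpy as np

rng = np.random.default_rng(1)
M, K, T = 4, 6, 2000              # filter length, number of nodes, iterations
w_true = rng.standard_normal(M)   # common parameter all nodes estimate

# Ring topology: each node combines with itself and its two neighbours.
A = np.zeros((K, K))
for k in range(K):
    for n in (k - 1, k, k + 1):
        A[k, n % K] = 1 / 3

w = np.zeros((K, M))
mu = 0.05
for _ in range(T):
    psi = np.empty_like(w)
    for k in range(K):            # adapt: local LMS step on node k's data
        u = rng.standard_normal(M)
        d = u @ w_true + 0.05 * rng.standard_normal()
        psi[k] = w[k] + mu * (d - u @ w[k]) * u
    w = A @ psi                   # combine: diffuse neighbour estimates

mse = np.mean((w - w_true) ** 2)
```

Every node ends up near the common parameter without any central fusion point, which is the robustness property the abstract highlights.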
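The accelerated-consensus paper builds on the standard linear consensus iteration, in which every node repeatedly averages its value with those of its neighbours. Below is only that baseline (with a made-up doubly stochastic weight matrix on a 4-node ring); the paper's contribution, filtering these iterates with set-theoretic adaptive filters, is not shown.

```python
import numpy as np

x = np.array([1.0, 4.0, 2.0, 7.0])   # initial node measurements
target = x.mean()                     # the consensus value to be reached

# Doubly stochastic weights on a ring: self-weight 1/2, neighbours 1/4.
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

for _ in range(100):
    x = W @ x                         # each node averages with its neighbours
```

Because W is doubly stochastic, the network average is preserved at every step and all nodes converge to it; the point of the paper is that a short adaptive filter over these iterates reaches a good estimate in far fewer rounds.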
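The subband adaptive shrinkage/thresholding paper generalizes the iterative shrinkage/thresholding algorithm (ISTA). The sketch below is plain ISTA with a single global step and threshold; the paper's contribution is precisely to make these subband-dependent. The problem sizes and parameters here are illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, step, iters):
    """Plain ISTA for min 0.5 * ||y - A x||^2 + lam * ||x||_1.
    Converges when step < 1 / ||A||^2 (spectral norm squared)."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(x + step * A.T @ (y - A @ x), step * lam)
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((40, 100)) / np.sqrt(40)   # random sensing matrix
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [2.0, -1.5, 1.0]             # 3-sparse ground truth
y = A @ x_true                                     # noiseless measurements
x_hat = ista(A, y, lam=0.01, step=0.1, iters=2000)
```

With a global step the convergence rate is dictated by the worst-conditioned part of the operator; assigning a separate step and threshold per subband, as the paper does, is what accelerates the iteration.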
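For the tensor decomposition paper, a standard related construction is the higher-order SVD (HOSVD) with mode-wise rank truncation, sketched below; the paper develops its own notion of singular values for multilinear mappings, which this sketch does not reproduce. The demo tensor has exact multilinear rank (2, 2, 2), so the truncated HOSVD recovers it exactly.

```python
import numpy as np

def unfold(T, mode):
    # Mode-m unfolding: move mode m to the front and flatten the rest.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_product(T, U, mode):
    # Mode-m product T x_m U, for U of shape (new_dim, T.shape[mode]).
    return np.moveaxis(np.tensordot(U, np.moveaxis(T, mode, 0), axes=1),
                       0, mode)

def hosvd_truncate(T, ranks):
    """HOSVD low multilinear-rank approximation: per-mode SVD of the
    unfoldings, then project onto the leading left singular vectors."""
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = T
    for m, U in enumerate(factors):
        core = mode_product(core, U.T, m)
    approx = core
    for m, U in enumerate(factors):
        approx = mode_product(approx, U, m)
    return approx

rng = np.random.default_rng(3)
core = rng.standard_normal((2, 2, 2))
T = core
for m, d in enumerate((5, 6, 7)):          # build a rank-(2,2,2) tensor
    T = mode_product(T, rng.standard_normal((d, 2)), m)
approx = hosvd_truncate(T, (2, 2, 2))
```

For tensors whose multilinear rank exceeds the target ranks, truncated HOSVD is generally only quasi-optimal, which is one motivation for the error bounds the paper derives.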

Aims & Scope

IEEE Transactions on Signal Processing covers novel theory, algorithms, performance analyses and applications of techniques for the processing, understanding, learning, retrieval, mining, and extraction of information from signals.

Full Aims & Scope

Meet Our Editors

Editor-in-Chief
Sergios Theodoridis
University of Athens