
IEEE Transactions on Information Theory

Issue 12 • Dec. 2008

  • Table of contents

    Page(s): C1 - C4
  • IEEE Transactions on Information Theory publication information

    Page(s): C2
  • Maxwell Construction: The Hidden Bridge Between Iterative and Maximum a Posteriori Decoding

    Page(s): 5277 - 5307

    There is a fundamental relationship between belief propagation and maximum a posteriori decoding. A decoding algorithm, called the Maxwell decoder, is introduced; it provides a constructive description of this relationship. Both the algorithm itself and the analysis of the new decoder are reminiscent of the Maxwell construction in thermodynamics. This paper investigates in detail the case of transmission over the binary erasure channel, while the extension to general binary memoryless channels is discussed in a companion paper.
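    Illustrative of the iterative/MAP gap this paper bridges: a minimal Python sketch (not the paper's Maxwell decoder; the code and erasure pattern are invented for illustration) in which peeling decoding stalls on a stopping set of the [7,4] Hamming code while MAP decoding, i.e., solving the erased-column linear system over GF(2), still recovers the codeword.

```python
# Hypothetical toy example (not the paper's Maxwell decoder): on the binary
# erasure channel, iterative (peeling) decoding stalls on a stopping set,
# while MAP decoding -- solving H_E x_E = s over GF(2) -- still succeeds
# whenever the erased columns of H are linearly independent.

H = [  # parity-check matrix of the [7,4] Hamming code
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def peeling_decode(received):
    """Repeatedly resolve any check containing exactly one erasure (None)."""
    word = list(received)
    progress = True
    while progress:
        progress = False
        for row in H:
            unknown = [j for j, h in enumerate(row) if h and word[j] is None]
            if len(unknown) == 1:
                j = unknown[0]  # erased bit = XOR of the known bits in the check
                word[j] = sum(word[k] for k, h in enumerate(row)
                              if h and k != j) % 2
                progress = True
    return word

def map_decode(received):
    """Gaussian elimination over GF(2); returns None if erasures are ambiguous."""
    erased = [j for j, v in enumerate(received) if v is None]
    rows = []
    for row in H:  # augmented system [H_E | syndrome from the known bits]
        s = sum(row[j] * received[j] for j in range(len(row))
                if received[j] is not None) % 2
        rows.append([row[j] for j in erased] + [s])
    piv = 0
    for col in range(len(erased)):
        for r in range(piv, len(rows)):
            if rows[r][col]:
                rows[piv], rows[r] = rows[r], rows[piv]
                for rr in range(len(rows)):
                    if rr != piv and rows[rr][col]:
                        rows[rr] = [(a + b) % 2 for a, b in zip(rows[rr], rows[piv])]
                piv += 1
                break
    if piv < len(erased):
        return None  # system underdetermined: MAP cannot resolve all erasures
    word = list(received)
    for i, j in enumerate(erased):  # reduced system reads x_{erased[i]} = rhs
        word[j] = rows[i][-1]
    return word

# All-zero codeword with bits 4-6 erased: {4, 5, 6} is a stopping set, so
# peeling makes no progress, yet columns 4-6 of H have full rank.
received = [0, 0, 0, 0, None, None, None]
assert peeling_decode(received).count(None) == 3  # iterative decoder stalls
assert map_decode(received) == [0] * 7            # MAP recovers the codeword
```
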
  • Permutation Decoding and the Stopping Redundancy Hierarchy of Cyclic and Extended Cyclic Codes

    Page(s): 5308 - 5331

    We introduce the notion of the stopping redundancy hierarchy of a linear block code as a measure of the tradeoff between performance and complexity of iterative decoding for the binary erasure channel. We derive lower and upper bounds for the stopping redundancy hierarchy via Lovász's local lemma (LLL) and Bonferroni-type inequalities, and specialize them for codes with cyclic parity-check matrices. Based on the observed properties of parity-check matrices with good stopping redundancy characteristics, we develop a novel decoding technique, termed automorphism group decoding, that combines iterative message passing and permutation decoding. We also present bounds on the smallest number of permutations of an automorphism group decoder needed to correct any set of erasures up to a prescribed size. Simulation results demonstrate that for a large number of algebraic codes, the performance of the new decoding method is close to that of maximum-likelihood (ML) decoding.
  • On the Weight Distributions of Two Classes of Cyclic Codes

    Page(s): 5332 - 5344

    Let q = p^m where p is an odd prime, m ≥ 2, and 1 ≤ k ≤ m−1. Let Tr be the trace mapping from F_q to F_p and ζ_p = e^{2πi/p} be a primitive pth root of unity. In this paper, we determine the value distribution of the exponential sums Σ_{x∈F_q} χ(αx^{p^k+1} + βx²) (α, β ∈ F_q), where χ(x) = ζ_p^{Tr(x)} is the canonical additive character of F_q. As applications, we have the following. 1) We determine the weight distribution of the cyclic codes C₁ and C₂ over F_{p^t} with parity-check polynomials h₂(x)h₃(x) and h₁(x)h₂(x)h₃(x), respectively, where t is a divisor of d = gcd(m, k), and h₁(x), h₂(x), and h₃(x) are the minimal polynomials of π^{−1}, π^{−2}, and π^{−(p^k+1)} over F_{p^t}, respectively, for a primitive element π of F_q. 2) We determine the correlation distribution between two m-sequences of period q−1. Moreover, we find a new class of p-ary bent functions. This paper extends the results in Feng and Luo (2008).
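    The sums above generalize the classical quadratic Gauss sum. As a quick numerical sanity check of the simplest relative (m = 1, so Tr is the identity, with only the quadratic term present), the sketch below verifies |Σ_{x∈F_p} ζ_p^{x²}| = √p for small odd primes; the paper's sums over F_{p^m} are, of course, far deeper.

```python
# Numerically check the classical quadratic Gauss sum |g(p)| = sqrt(p),
# the simplest relative (m = 1, alpha = 0) of the exponential sums above.
import cmath
import math

def gauss_sum(p):
    zeta = cmath.exp(2j * cmath.pi / p)  # primitive p-th root of unity
    return sum(zeta ** (x * x % p) for x in range(p))

for p in (3, 5, 7, 11, 13):
    assert abs(abs(gauss_sum(p)) - math.sqrt(p)) < 1e-9
```
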
  • Cyclic Codes and Sequences From Generalized Coulter–Matthews Function

    Page(s): 5345 - 5353

    In this paper, we study the exponential sum Σ_{x∈F_q} χ(αx^{(p^k+1)/2} + βx), which is related to the generalized Coulter–Matthews function x^{(p^k+1)/2} with k/gcd(m, k) odd. As applications, we obtain the following: the correlation distribution between a p-ary m-sequence and its decimated sequence with decimation factor (p^k+1)/2; and the weight distribution of the cyclic code whose dual has the two zeros π^{−1} and π^{−(p^k+1)/2}.
  • Sharp p-Divisibility of Weights in Abelian Codes Over Z/p^dZ

    Page(s): 5354 - 5380

    A theorem of McEliece on the p-divisibility of Hamming weights in cyclic codes over F_p is generalized to Abelian codes over Z/p^dZ. This work improves upon results of Helleseth–Kumar–Moreno–Shanbhag, Calderbank–Li–Poonen, Wilson, and Katz. These previous attempts are not sharp in general, i.e., they do not report the full extent of the p-divisibility except in special cases, nor do they give accounts of the precise circumstances under which they do provide best possible results. This paper provides sharp results on the p-divisibility of Hamming weights and of counts of any particular symbol for an arbitrary Abelian code over Z/p^dZ. It also presents sharp results on the 2-divisibility of Lee and Euclidean weights for Abelian codes over Z/4Z.
  • Weil-Serre Type Bounds for Cyclic Codes

    Page(s): 5381 - 5395

    We give a new method for obtaining Weil–Serre type bounds on the minimum distance of arbitrary cyclic codes over F_{p^e} of length coprime to p, where e ≥ 1 is an arbitrary integer. In an earlier paper we obtained Weil–Serre type bounds for such codes only when e = 1 or e = 2, using lengthy explicit factorizations that seem hopeless to generalize. The new method avoids such explicit factorizations and provides an effective alternative. Using our method we obtain Weil–Serre type bounds in various cases. Examples show that our bounds compare very well with the Bose–Chaudhuri–Hocquenghem (BCH) bound and yield the exact minimum distance in some cases.
  • Adaptive Methods for Linear Programming Decoding

    Page(s): 5396 - 5410

    Detectability of failures of linear programming (LP) decoding and the potential for improvement by adding new constraints motivate an adaptive approach to selecting the constraints of the underlying LP problem. In this paper, we take a first step in studying this method and show that by starting from a simple LP problem and adaptively adding the necessary constraints, the complexity of LP decoding can be significantly reduced. In particular, we observe that with adaptive LP decoding, the sizes of the LP problems that need to be solved become practically independent of the density of the parity-check matrix. We further show that adaptively adding extra constraints, such as constraints based on redundant parity checks, can provide large performance gains.
  • Orthogonal Codes for Robust Low-Cost Communication

    Page(s): 5411 - 5426

    Orthogonal coding schemes, known to asymptotically achieve the capacity per unit cost (CPUC) for single-user ergodic memoryless channels with a zero-cost input symbol, are investigated for single-user compound memoryless channels, which exhibit uncertainties in their input-output statistical relationships. A minimax formulation is adopted to attain robustness. First, a class of achievable rates per unit cost (ARPUC) is derived, and its utility is demonstrated through several representative case studies. Second, when the uncertainty set of channel transition statistics satisfies a convexity property, optimization is performed over the class of ARPUC through utilizing results of minimax robustness. The resulting CPUC lower bound indicates the ultimate performance of the orthogonal coding scheme, and coincides with the CPUC under certain restrictive conditions. Finally, still under the convexity property, it is shown that the CPUC can generally be achieved, through utilizing a so-called mixed strategy in which an orthogonal code contains an appropriate composition of different nonzero-cost input symbols.
  • An Analysis of the MIMO–SDMA Channel With Space–Time Orthogonal and Quasi-Orthogonal User Transmissions and Efficient Successive Cancellation Decoders

    Page(s): 5427 - 5446

    We consider space-time transceiver architectures for space-division multiple-access (SDMA) fading channels with simultaneous transmissions from multiple users. Each user has up to four transmit antennas and employs a space-time orthogonal or a quasi-orthogonal design as an inner code. At the multiple-antenna receiver, efficient successive group interference cancellation strategies based on zero-forcing or minimum mean-square error (MMSE) filtering are employed in some fixed or channel-dependent order. These strategies are efficient in the sense that they exploit the special structure of the inner codes to yield much higher diversity orders than would be otherwise possible, while at the same time preserving what we call the decoupling property of the constituent inner codes which enables the use of low-complexity outer encoders/decoders for each user. Motivated by the special structure of the effective channel matrix induced by the inner codes, we obtain several new distribution results on the QR and eigenvalue decompositions of certain structured random matrices. These results are the key to a comprehensive performance analysis of the proposed multiuser transceiver architectures including the characterization of diversity-multiplexing tradeoff (DMT) curves and exact per-user bit-error rates (BERs) without making simplifying assumptions about error propagation.
  • Bit-Interleaved Coded Modulation in the Wideband Regime

    Page(s): 5447 - 5455

    The wideband regime of bit-interleaved coded modulation (BICM) in Gaussian channels is studied. The Taylor expansion of the coded modulation capacity for generic signal constellations at low signal-to-noise ratio (SNR) is derived and used to determine the corresponding expansion for the BICM capacity. Simple formulas for the minimum energy per bit and the wideband slope are given. BICM is found to be suboptimal in the sense that its minimum energy per bit can be larger than the corresponding value for coded modulation schemes. The minimum energy per bit using standard Gray mapping on M-PAM or M²-QAM is given by a simple formula and shown to approach −0.34 dB as M increases. Using the low-SNR expansion, a general tradeoff between power and bandwidth in the wideband regime is used to show how a power loss can be traded off against a bandwidth gain.
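    The numbers quoted in this abstract can be checked directly: the minimum energy per bit on the AWGN channel is ln 2 ≈ −1.59 dB, so the −0.34 dB limit for Gray-mapped BICM corresponds to a wideband penalty of roughly 1.25 dB. A small bookkeeping sketch:

```python
# Wideband-limit bookkeeping: (Eb/N0)_min = ln 2 for the AWGN channel,
# versus the -0.34 dB approached by Gray-mapped M-PAM / M^2-QAM BICM.
import math

ebn0_min_db = 10 * math.log10(math.log(2))  # optimal: about -1.59 dB
bicm_gap_db = -0.34 - ebn0_min_db           # BICM penalty: about 1.25 dB

assert abs(ebn0_min_db - (-1.59)) < 0.01
assert abs(bicm_gap_db - 1.25) < 0.01
```
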
  • Optimum Receiver for a Realistic Transmit–Receive Diversity System in Correlated Fading

    Page(s): 5456 - 5468

    A transmit-receive diversity system in correlated Rayleigh fading, in which the receiver estimates the channel through pilot symbols and feeds this information back to the transmitter through a feedback path, is considered. The imperfect channel state information (CSI) is used by the transmitter to obtain the transmit weight vector for data transmission. The optimum receiver in the maximum-likelihood (ML) sense, obtained from the conditional distribution of the received signal vector, conditioned on the imperfect CSI and the transmit weight vector, is derived for the system. For the case of M-ary phase-shift keying (MPSK), an analytical expression for the conditional symbol error probability (SEP), conditioned on the channel estimate and the transmit weight vector, is obtained, with the transmit weight vector chosen to minimize this conditional SEP. For the receive-only and transmit-only correlation scenarios with ill-conditioned eigenvalues of the receive and transmit covariance matrices (that is, some of the eigenvalues are very small), we derive expressions for the diversity gain. Numerical results are presented to compare the performance of our receiver with that of a conventional receiver in the case of exponentially correlated fading. These results show that the optimum receiver typically has about a 0.5-dB gain over a conventional receiver when the correlation coefficient exceeds 0.5 and the number of receive antennas is much larger than the number of transmit antennas.
  • A Calculus for Log-Convex Interference Functions

    Page(s): 5469 - 5490

    The behavior of certain interference-coupled multiuser systems can be modeled by means of logarithmically convex (log-convex) interference functions. In this paper, we show fundamental properties of this framework. A key observation is that any log-convex interference function can be expressed as an optimum over elementary log-convex interference functions. The results also contribute to a better understanding of certain quality-of-service (QoS) tradeoff regions, which can be expressed as sublevel sets of log-convex interference functions. We analyze the structure of the QoS region and provide conditions for the achievability of boundary points. The proposed framework of log-convex interference functions generalizes the classical linear interference model, which is closely connected with the theory of irreducible nonnegative matrices (Perron-Frobenius theory). We discuss some possible applications in robust communication, cooperative game theory, and max-min fairness.
  • Distributed Downlink Beamforming With Cooperative Base Stations

    Page(s): 5491 - 5499

    In this paper, we consider multicell processing on the downlink of a cellular network to accomplish "macrodiversity" transmit beamforming. The particular downlink beamformer structure we consider allows the downlink beamforming problem to be recast as a virtual linear minimum mean-square error (LMMSE) estimation problem. We exploit the structure of the channel and develop distributed beamforming algorithms using local message passing between neighboring base stations. For 1-D networks, we use the Kalman smoothing framework to obtain a forward-backward beamforming algorithm. We also propose a limited-extent version of this algorithm which shows that, in practice, the delay need not grow with the size of the network. For 2-D cellular networks, we remodel the network as a factor graph and present a distributed beamforming algorithm based on the sum-product algorithm. Despite the presence of loops in the factor graph, the algorithm produces optimal results if convergence occurs.
  • A Geometric Interpretation of Fading in Wireless Networks: Theory and Applications

    Page(s): 5500 - 5510

    In wireless networks with random node distribution, the underlying point process model and the channel fading process are usually considered separately. A unified framework is introduced that permits the geometric characterization of fading by incorporating the fading process into the point process model. Concretely, assuming nodes are distributed in a stationary Poisson point process in R^d, the properties of the point processes that describe the path loss with fading are analyzed. The main applications are single-hop connectivity and broadcasting.
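    The modeling idea can be mimicked in a few lines: treat the fading-scaled path loss r^α/h of each node of a planar Poisson process as the quantity the receiver actually sees. In the hypothetical simulation below (all parameters invented for illustration), Rayleigh fading sometimes makes a farther node the one with the smallest effective loss, which is precisely why fading is worth folding into the point process model.

```python
# Hypothetical sketch: fold i.i.d. Rayleigh fading (exponential power h)
# into a planar Poisson point process by ranking nodes by the effective
# path loss r^alpha / h seen at a receiver placed at the origin.
import math
import random

def poisson(mean, rng):
    """Sample Poisson(mean) by counting unit-rate exponential arrivals."""
    n, t = 0, rng.expovariate(1.0)
    while t < mean:
        n += 1
        t += rng.expovariate(1.0)
    return n

def serving_node_flipped(rng, lam=1.0, radius=5.0, alpha=4.0):
    """One trial: does fading make a non-nearest node the min-loss node?"""
    n = poisson(lam * math.pi * radius ** 2, rng)
    if n == 0:
        return None
    # radial distances of uniform points in the disk (angles are irrelevant)
    dists = [radius * math.sqrt(rng.random()) for _ in range(n)]
    losses = [(r ** alpha / rng.expovariate(1.0), r) for r in dists]
    return min(losses)[1] != min(dists)

rng = random.Random(7)
flips = [t for t in (serving_node_flipped(rng) for _ in range(200))
         if t is not None]
# Fading flips the "serving" node in some, but not all, realizations.
assert 0 < flips.count(True) < len(flips)
```
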
  • On the Delay and Throughput Gains of Coding in Unreliable Networks

    Page(s): 5511 - 5524

    In an unreliable packet network setting, we study the performance gains of optimal transmission strategies in the presence and absence of coding capability at the transmitter, where performance is measured in delay and throughput. Although our results apply to a large class of coding strategies including maximum-distance separable (MDS) and Digital Fountain codes, we use random network codes in our discussions because these codes have a greater applicability for complex network topologies. To that end, after introducing a key setting in which performance analysis and comparison can be carried out, we provide closed-form as well as asymptotic expressions for the delay performance with and without network coding. We show that the network coding capability can lead to arbitrarily better delay performance as the system parameters scale when compared to traditional transmission strategies without coding. We further develop a joint scheduling and random-access scheme to extend our results to general wireless network topologies.
  • Universal Delay-Limited Simulation

    Page(s): 5525 - 5533

    Universal, delay-limited simulation of an unknown information source of a certain parametric family (e.g., the family of memoryless sources or Markov sources of a given order), given a training sequence from that source and a stream of independent and uniformly distributed bits, is considered. The goal of universal simulation is that the probability law of the generated sequence be identical to that of the training sequence, with minimum mutual information between the random processes generating both sequences. In the delay-limited setting, the simulation algorithm generates a random sequence sequentially, by delivering one symbol for each training symbol that is made available after a given initial delay, whereas the random bits are assumed to be available on demand. In this paper, the optimal universal delay-limited simulation scheme is characterized for broad parametric families, and the mutual information achieved by the proposed scheme is analyzed. The results are extended to a setting of variable delay.
  • Gaussian Interference Channel Capacity to Within One Bit

    Page(s): 5534 - 5562

    The capacity of the two-user Gaussian interference channel has been open for 30 years, and understanding of the problem has been limited. The best known achievable region, due to Han and Kobayashi, has a very complicated characterization. It is also not known how tight the existing outer bounds are. In this work, we show that the existing outer bounds can in fact be arbitrarily loose in some parameter ranges, and by deriving new outer bounds, we show that a very simple and explicit Han-Kobayashi type scheme achieves rates within a single bit per second per hertz (bit/s/Hz) of the capacity for all values of the channel parameters. We also show that the scheme is asymptotically optimal in certain high signal-to-noise ratio (SNR) regimes. Using our results, we provide a natural generalization of the classical point-to-point notion of degrees of freedom to interference-limited scenarios.
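    As quoted from memory of this line of work (the precise statement is in the paper): the resulting symmetric generalized degrees of freedom, parameterized by α = log INR / log SNR, trace the well-known "W" curve, which a few lines of code can tabulate.

```python
# Symmetric generalized degrees of freedom d(alpha) of the two-user
# Gaussian interference channel, alpha = log INR / log SNR -- the "W"
# curve (quoted from memory of this line of work, not derived here).

def d_sym(alpha):
    if alpha <= 1 / 2:
        return 1 - alpha          # weak interference, treated like noise
    if alpha <= 2 / 3:
        return alpha
    if alpha <= 1:
        return 1 - alpha / 2
    if alpha <= 2:
        return alpha / 2
    return 1.0                    # very strong interference: no loss

# The characteristic W shape: full d.o.f. at the ends, dips at 1/2 and 1.
assert d_sym(0) == 1 and d_sym(2) == 1
assert d_sym(1 / 2) == 1 / 2 and d_sym(1) == 1 / 2
assert abs(d_sym(2 / 3) - 2 / 3) < 1e-12
```
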
  • Universal Lossless Compression of Erased Symbols

    Page(s): 5563 - 5574

    A source X goes through an erasure channel whose output is Z. The goal is to compress X losslessly when the compressor knows both X and Z and the decompressor knows Z. We propose a universal algorithm based on context-tree weighting (CTW), parameterized by a memory-length parameter l. We show that if the erasure channel is stationary and memoryless, and X is stationary and ergodic, then the proposed algorithm achieves a compression rate of H(X_0 | X_{-l}^{-1}, Z^l) bits per erasure.
  • Universal Multiterminal Source Coding Algorithms With Asymptotically Zero Feedback: Fixed Database Case

    Page(s): 5575 - 5590

    Consider a source network in which a finite-alphabet source X = {X_i}_{i=0}^∞ is to be encoded and transmitted, and another finite-alphabet source Y = {Y_i}_{i=0}^∞ correlated with X is available only to the decoder as side information. Traditionally, the channel between the encoder and decoder in the source network is assumed to be one-way. This, together with the fact that the encoder does not have access to Y, implies that the encoder has to know the achievable rates before encoding. In this paper, we consider universal source coding for a feedback source network in which the channel between the encoder and decoder is two-way and asymmetric. Assume that the encoder and decoder share a random database that is independent of both X and Y. A string-matching-based (variable-rate) block coding algorithm with simple progressive encoding and joint typicality decoding is first proposed for the feedback source network. The simple progressive encoder does not need to know the achievable rates at the beginning of encoding. It is proven that for any (X, Y) in a large class of sources satisfying some mixing conditions, the average number of bits per letter transmitted from the encoder to the decoder (compression rate) goes to the conditional entropy H(X | Y) of X given Y asymptotically, and at the same time the average number of bits per letter transmitted from the decoder to the encoder (feedback rate) goes to 0 asymptotically. The algorithm and the corresponding analysis results are then extended to the case where both X and Y are to be encoded separately, but decoded jointly. Finally, a universal decoding algorithm is proposed to replace the joint typicality decoding, and the resulting universal compression algorithm consisting of the simple progressive encoder and the universal decoding algorithm is further shown to be asymptotically optimal for the class of all jointly memoryless source-side information pairs (X, Y).
  • Side-Information Scalable Source Coding

    Page(s): 5591 - 5608

    We consider the problem of side-information scalable (SI-scalable) source coding, where the encoder constructs a two-layer description, such that the receiver with high quality side information will be able to use only the first layer to reconstruct the source in a lossy manner, while the receiver with low quality side information will have to receive both layers in order to decode. We provide inner and outer bounds to the rate-distortion (R-D) region for general discrete memoryless sources. The achievable region is tight when either one of the decoders requires a lossless reconstruction, and when the distortion measures are degraded and deterministic. Furthermore, the gap between the inner and the outer bounds can be bounded by certain constants when the squared error distortion measure is used. The notion of perfect scalability is introduced, for which necessary and sufficient conditions are given for sources satisfying a mild support condition. Using SI-scalable coding and successive refinement Wyner-Ziv coding as basic building blocks, we provide a complete characterization of the rate-distortion region for the important quadratic Gaussian source with multiple jointly Gaussian side informations, where the side information quality is not necessarily monotonic along the scalable coding order. A partial result is provided for the doubly symmetric binary source under the Hamming distortion measure when the worse side information is a constant, for which one of the outer bounds is strictly tighter than the other.
  • Scanning and Sequential Decision Making for Multidimensional Data—Part II: The Noisy Case

    Page(s): 5609 - 5631

    We consider the problem of sequential decision making for random fields corrupted by noise. In this scenario, the decision maker observes a noisy version of the data, yet is judged with respect to the clean data. In particular, we first consider the problem of scanning and sequentially filtering noisy random fields. In this case, the sequential filter is given the freedom to choose the path over which it traverses the random field (e.g., a noisy image or video sequence), so it is natural to ask what the best achievable performance is and how sensitive this performance is to the choice of the scan. We formally define the problem of scanning and filtering, derive a bound on the best achievable performance, and quantify the excess loss occurring when nonoptimal scanners are used, compared to optimal scanning and filtering. We then discuss the problem of scanning and prediction for noisy random fields. This setting is a natural model for applications such as restoration and coding of noisy images. We formally define the problem of scanning and prediction of a noisy multidimensional array and relate the optimal performance to the clean scandictability defined by Merhav and Weissman. Moreover, bounds on the excess loss due to suboptimal scans are derived, and a universal prediction algorithm is suggested. This paper is the second part of a two-part paper. The first paper dealt with scanning and sequential decision making on noiseless data arrays.
  • Universal Denoising of Discrete-Time Continuous-Amplitude Signals

    Page(s): 5632 - 5660

    We consider the problem of reconstructing a discrete-time signal (sequence) with continuous-valued components corrupted by a known memoryless channel. When performance is measured using a per-symbol loss function satisfying mild regularity conditions, we develop a sequence of denoisers that, although independent of the distribution of the underlying "clean" sequence, is universally optimal in the limit of large sequence length. This sequence of denoisers is universal in the sense of performing as well as any sliding-window denoising scheme which may be optimized for the underlying clean signal. Our results are initially developed in a "semi-stochastic" setting, where the noiseless signal is an unknown individual sequence, and the only source of randomness is due to the channel noise. It is subsequently shown that in the fully stochastic setting, where the noiseless sequence is a stationary stochastic process, our schemes universally attain optimum performance. The proposed schemes draw from nonparametric density estimation techniques and are practically implementable. We demonstrate the efficacy of the proposed schemes in denoising grayscale images in the conventional additive white Gaussian noise (AWGN) setting, with additional promising results for less conventional noise distributions.
  • Stability Results for Random Sampling of Sparse Trigonometric Polynomials

    Page(s): 5661 - 5670

    Recently, it has been observed that a sparse trigonometric polynomial, i.e., one having only a small number of nonzero coefficients, can be reconstructed exactly from a small number of random samples using basis pursuit (BP) or orthogonal matching pursuit (OMP). In this paper, it is shown that recovery by a BP variant is stable under perturbation of the sample values by noise. A similar partial result for OMP is provided. For BP, in addition, the stability result is extended to (nonsparse) trigonometric polynomials that can be well approximated by sparse ones. The theoretical findings are illustrated by numerical experiments.
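    For the noiseless recovery phenomenon this abstract builds on, a minimal, self-contained OMP run can be sketched in pure Python (toy parameters invented; the paper's stability analysis under noise is not reproduced here): a 2-sparse trigonometric polynomial is recovered from random time samples by greedily matching dictionary atoms to the residual.

```python
# Hypothetical demo: orthogonal matching pursuit (OMP) recovering a sparse
# trigonometric polynomial sum_k c_k e^{ikt} from random samples t_j.
import cmath
import math
import random

rng = random.Random(3)
D, m = 64, 48                       # frequency range {0..D-1}, sample count
coeffs = {5: 1.0, 19: -0.7}         # true 2-sparse coefficient vector

ts = [rng.uniform(0, 2 * math.pi) for _ in range(m)]
y = [sum(c * cmath.exp(1j * k * t) for k, c in coeffs.items()) for t in ts]

def norm(v):
    return math.sqrt(sum(abs(z) ** 2 for z in v))

def dot(u, v):                      # complex inner product <u, v>
    return sum(a.conjugate() * b for a, b in zip(u, v))

atoms = {}
for k in range(D):                  # unit-norm dictionary atoms
    col = [cmath.exp(1j * k * t) for t in ts]
    n = norm(col)
    atoms[k] = [z / n for z in col]

resid, basis, picked = list(y), [], []
while norm(resid) > 1e-8 and len(picked) < 6:
    # greedy step: pick the atom most correlated with the current residual
    k = max((kk for kk in range(D) if kk not in picked),
            key=lambda kk: abs(dot(atoms[kk], resid)))
    picked.append(k)
    # Gram-Schmidt the new atom against the chosen set, update the residual
    q = list(atoms[k])
    for b in basis:
        c = dot(b, q)
        q = [u - c * v for u, v in zip(q, b)]
    nq = norm(q)
    q = [z / nq for z in q]
    basis.append(q)
    c = dot(q, resid)
    resid = [u - c * v for u, v in zip(resid, q)]

assert {5, 19}.issubset(picked)     # true support found
assert norm(resid) < 1e-8           # samples reproduced exactly
```
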

Aims & Scope

IEEE Transactions on Information Theory publishes papers concerned with the transmission, processing, and utilization of information.


Meet Our Editors

Editor-in-Chief
Frank R. Kschischang

Department of Electrical and Computer Engineering