Information Theory, IEEE Transactions on

Issue 7 • Date July 2014

Displaying Results 1 - 25 of 50
  • Table of contents

    Publication Year: 2014 , Page(s): C1 - C4
    PDF (175 KB)
    Freely Available from IEEE
  • IEEE Transactions on Information Theory publication information

    Publication Year: 2014 , Page(s): C2
    PDF (142 KB)
    Freely Available from IEEE
  • A Theory of Network Equivalence – Part II: Multiterminal Channels

    Publication Year: 2014 , Page(s): 3709 - 3732
    Cited by:  Papers (2)
    PDF (2348 KB) | HTML

    A technique for bounding the capacities of networks of independent channels is introduced. Parts I and II treat point-to-point and multiterminal channels, respectively. Bounds are derived using a new tool called a bounding model. Channel 1 is an upper (lower) bounding model for channel 2 if replacing channel 2 by channel 1 in any network yields a new network whose capacity region is a superset (subset) of the capacity region of the original network. This paper derives bounding models from noiseless links, with lower bounding models corresponding to points in the channel's capacity region and upper bounding models corresponding to points in a new channel characterization called an emulation region. Replacing all channels in a network by their noiseless upper (lower) bounding models yields a network of lossless links whose capacity region is a superset (subset) of the capacity region for the original network. This converts a general (often stochastic) network into a network coding instance, enabling the application of tools and results derived in that domain. A channel's upper and lower bounding models differ when the channel can carry more information in some networks than in others. Bounding the difference between upper and lower bounding models bounds both the accuracy of the technique and the price of separating source-network coding from channel coding.

  • Universal Communication—Part II: Channels With Memory

    Publication Year: 2014 , Page(s): 3733 - 3747
    PDF (895 KB) | HTML

    Consider communication over a channel whose probabilistic model is completely unknown vector-wise and is not assumed to be stationary. Communication over such channels is challenging because knowing the past does not indicate anything about the future. The existence of reliable feedback and common randomness is assumed. In a previous paper, it was shown that the Shannon capacity cannot be attained, in general, if the channel is not known. An alternative notion of capacity was defined, as the maximum rate of reliable communication by any block-coding system used over consecutive blocks. This rate was shown to be achievable for the modulo-additive channel with an individual, unknown noise sequence, and not achievable for some channels with memory. In this paper, this capacity is shown to be achievable for general channel models possibly including memory, as long as this memory fades with time. In other words, there exists a system with feedback and common randomness that, without knowledge of the channel, asymptotically performs as well as any block code, which may be designed knowing the channel. For channels in which memory does not fade with time, a weaker type of capacity is shown to be achievable.

  • On Marton’s Inner Bound for the General Broadcast Channel

    Publication Year: 2014 , Page(s): 3748 - 3762
    PDF (438 KB) | HTML

    We establish several new results on Marton's inner bound on the capacity region of the general broadcast channel. Inspired by the fact that Marton's coding scheme without superposition coding is optimal in the Gaussian case, we consider the class of binary input degraded broadcast channels with no common message that have the same property. We characterize this class. We also establish new properties of Marton's inner bound that help restrict the search space for computing the Marton sum rate. In particular, we establish an extension of the XOR case of the binary inequality of Nair, Wang, and Geng.

  • Capacity Bounds and Sum Rate Capacities of a Class of Discrete Memoryless Interference Channels

    Publication Year: 2014 , Page(s): 3763 - 3772
    PDF (473 KB) | HTML

    This paper studies the capacity of a class of discrete memoryless interference channels (DMICs), where interference is defined analogous to that of a Gaussian interference channel with one-sided weak interference. The sum-rate capacity of this class of channels is determined. As with the Gaussian case, the sum-rate capacity is achieved by letting the transceiver pair subject to interference communicate at a rate such that its message can be decoded at the unintended receiver using single user detection. It is also established that this class of DMICs is equivalent in capacity region to certain degraded interference channels. This allows the construction of capacity outer-bounds using the capacity regions of associated degraded broadcast channels. The same technique is then used to determine the sum-rate capacity of DMICs with mixed interference as defined in this paper. The obtained capacity bounds and sum-rate capacities are used to resolve the capacities of several new DMICs.

  • The Entropy Power Inequality and Mrs. Gerber's Lemma for Groups of Order 2^n

    Publication Year: 2014 , Page(s): 3773 - 3786
    PDF (554 KB) | HTML

    Shannon's entropy power inequality can be viewed as characterizing the minimum differential entropy achievable by the sum of two independent random variables with fixed differential entropies. The entropy power inequality has played a key role in resolving a number of problems in information theory. It is therefore interesting to examine the existence of a similar inequality for discrete random variables. In this paper, we obtain an entropy power inequality for random variables taking values in a group of order 2^n, i.e., for such a group G, we explicitly characterize the function f_G(x, y) giving the minimum entropy of the sum of two independent G-valued random variables with respective entropies x and y. Random variables achieving the extremum in this inequality are thus the analogs of Gaussians in this case, and these are also determined. It turns out that f_G(x, y) is convex in x for fixed y and, by symmetry, convex in y for fixed x. This is a generalization to groups of order 2^n of the result known as Mrs. Gerber's Lemma.

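As a small illustration of the function f_G discussed above, the order-2 case G = Z_2 reduces to the classical Mrs. Gerber's Lemma: the minimum entropy of the sum is h(h^{-1}(x) ⋆ h^{-1}(y)), where h is the binary entropy function and a ⋆ b = a(1-b) + (1-a)b. The sketch below evaluates this bound numerically; the bisection inverse is our own helper, not anything from the paper.

```python
import numpy as np

def h2(p):
    """Binary entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return float(-p*np.log2(p) - (1-p)*np.log2(1-p))

def h2_inv(x):
    """Inverse of h2 on [0, 1/2], by bisection (our helper)."""
    lo, hi = 0.0, 0.5
    for _ in range(60):
        mid = (lo + hi) / 2
        if h2(mid) < x:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def mgl(x, y):
    """Minimum entropy of X + Y over Z_2 given H(X) = x, H(Y) = y."""
    p, q = h2_inv(x), h2_inv(y)
    return h2(p*(1-q) + (1-p)*q)

print(mgl(0.5, 0.5))   # ~0.713: two half-bit inputs XOR to >= 0.713 bits
```

For binary variables the bound holds with equality; the paper's contribution is the characterization of f_G for all groups of order 2^n.
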
  • A New Entropy Power Inequality for Integer-Valued Random Variables

    Publication Year: 2014 , Page(s): 3787 - 3796
    PDF (402 KB) | HTML

    The entropy power inequality (EPI) yields lower bounds on the differential entropy of the sum of two independent real-valued random variables in terms of the individual entropies. Versions of the EPI for discrete random variables have been obtained for special families of distributions with the differential entropy replaced by the discrete entropy, but no universal inequality is known (beyond trivial ones). More recently, the sumset theory for the entropy function yields a sharp inequality H(X + X') - H(X) ≥ 1/2 - o(1) when X, X' are independent identically distributed (i.i.d.) with high entropy. This paper provides the inequality H(X + X') - H(X) ≥ g(H(X)), where X, X' are arbitrary i.i.d. integer-valued random variables and where g is a universal strictly positive function on ℝ+ satisfying g(0) = 0. Extensions to nonidentically distributed random variables and to conditional entropies are also obtained.

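The abstract's central quantity is the entropy gap H(X+X') - H(X) for i.i.d. integer-valued X. The function g is not stated in closed form above, but the gap itself is easy to compute for any finite-support pmf; a minimal sketch with an arbitrary example distribution of our own choosing:

```python
import numpy as np

def entropy_bits(p):
    """Shannon entropy (bits) of a pmf given as a 1-D array."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
p = rng.random(10)                  # arbitrary pmf on {0, ..., 9}
p /= p.sum()
p_sum = np.convolve(p, p)           # pmf of X + X' for i.i.d. X, X'
gap = entropy_bits(p_sum) - entropy_bits(p)
print(f"H(X) = {entropy_bits(p):.3f}  H(X+X') = {entropy_bits(p_sum):.3f}"
      f"  gap = {gap:.3f}")
```
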
  • Rényi Divergence and Kullback-Leibler Divergence

    Publication Year: 2014 , Page(s): 3797 - 3820
    Cited by:  Papers (1)
    PDF (2624 KB) | HTML

    Rényi divergence is related to Rényi entropy much like Kullback-Leibler divergence is related to Shannon's entropy, and comes up in many settings. It was introduced by Rényi as a measure of information that satisfies almost the same axioms as Kullback-Leibler divergence, and depends on a parameter that is called its order. In particular, the Rényi divergence of order 1 equals the Kullback-Leibler divergence. We review and extend the most important properties of Rényi divergence and Kullback-Leibler divergence, including convexity, continuity, limits of σ-algebras, and the relation of the special order 0 to the Gaussian dichotomy and contiguity. We also show how to generalize the Pythagorean inequality to orders different from 1, and we extend the known equivalence between channel capacity and minimax redundancy to continuous channel inputs (for all orders) and present several other minimax results.

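As a quick numerical companion to the abstract: for discrete distributions the Rényi divergence of order α is D_α(P||Q) = (1/(α-1)) log Σ p_i^α q_i^(1-α), which tends to the Kullback-Leibler divergence as α → 1. A minimal sketch (the example distributions are our own):

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence in bits."""
    m = p > 0
    return float((p[m] * np.log2(p[m] / q[m])).sum())

def renyi(p, q, alpha):
    """Rényi divergence of order alpha (alpha > 0, alpha != 1), in bits."""
    m = p > 0
    return float(np.log2((p[m]**alpha * q[m]**(1 - alpha)).sum()) / (alpha - 1))

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.2, 0.4, 0.4])
for a in (0.5, 0.9, 0.999, 2.0):
    print(f"D_{a}(P||Q) = {renyi(p, q, a):.4f}")
print(f"KL(P||Q)   = {kl(p, q):.4f}   # matches the alpha -> 1 limit")
```
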
  • Finite p-Groups, Entropy Vectors, and the Ingleton Inequality for Nilpotent Groups

    Publication Year: 2014 , Page(s): 3821 - 3824
    PDF (229 KB) | HTML

    In this paper, we study the capacity/entropy region of finite, directed, acyclic, multiple-source, multiple-sink networks by means of group theory and entropy vectors coming from groups. There is a one-to-one correspondence between the entropy vector of a collection of n random variables and a certain group-characterizable vector obtained from a finite group and n of its subgroups. We consider nilpotent-group-characterizable entropy vectors and show that they are all also abelian-group-characterizable, and hence they satisfy the Ingleton inequality. It is known that not all entropic vectors can be obtained from abelian groups, so our result implies that to get more exotic entropic vectors, one has to go at least to soluble groups or larger nilpotency classes. The result also implies that the Ingleton inequality is satisfied by nilpotent groups of bounded class, depending on the order of the group.

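To make the group-characterizable correspondence concrete: given a finite group G and subgroups G_1, ..., G_n, the vector h_S = log(|G| / |∩_{i∈S} G_i|) is entropic. The sketch below builds such a vector from the abelian group Z_2 × Z_2 and checks one standard form of the Ingleton inequality; the specific subgroups are our own illustrative choice.

```python
from math import log2

# Group-characterizable entropy vectors: for a finite group G with
# subgroups G_1..G_4, h_S = log2(|G| / |intersection_{i in S} G_i|)
# is entropic. Here G = Z_2 x Z_2 (abelian).
G = {(a, b) for a in range(2) for b in range(2)}
subgroups = [
    {(0, 0), (1, 0)},
    {(0, 0), (0, 1)},
    {(0, 0), (1, 1)},
    {(0, 0)},
]

def h(S):
    inter = set(G)
    for i in S:
        inter &= subgroups[i]
    return log2(len(G) / len(inter))

# One standard form of the Ingleton inequality on variables 1..4:
# h1 + h2 + h34 + h123 + h124 <= h12 + h13 + h14 + h23 + h24
lhs = h([0]) + h([1]) + h([2, 3]) + h([0, 1, 2]) + h([0, 1, 3])
rhs = h([0, 1]) + h([0, 2]) + h([0, 3]) + h([1, 2]) + h([1, 3])
print(lhs, "<=", rhs, "->", lhs <= rhs + 1e-12)
```
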
  • The Zero-Undetected-Error Capacity Approaches the Sperner Capacity

    Publication Year: 2014 , Page(s): 3825 - 3833
    Cited by:  Papers (3)
    PDF (287 KB) | HTML

    Ahlswede, Cai, and Zhang proved that, in the noise-free limit, the zero-undetected-error capacity is lower bounded by the Sperner capacity of the channel graph, and they conjectured equality. Here, we derive an upper bound that proves the conjecture.

  • Upper and Lower Bounds to the Information Rate Transferred Through First-Order Markov Channels With Free-Running Continuous State

    Publication Year: 2014 , Page(s): 3834 - 3844
    Cited by:  Papers (1)
    PDF (688 KB) | HTML

    Starting from the definition of mutual information, one promptly realizes that the probabilities inferred by Bayesian tracking can be used to compute the Shannon information between the state and the measurement of a dynamic system. In the Gaussian and linear case, the information rate can be evaluated from the probabilities computed by the Kalman filter. When the probability distributions inferred by Bayesian tracking are intractable, one is forced to resort to approximate inference, which gives only an approximation to the wanted probabilities. We propose upper and lower bounds to the information rate between the hidden state and the measurement based on approximate inference. Application of these bounds to multiplicative communication channels is discussed, and experimental results for the discrete-time phase noise channel and for the Gauss-Markov fading channel are presented.

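In the linear-Gaussian case mentioned in the abstract, the information rate follows exactly from the Kalman filter: (1/n) I(X^n; Y^n) = (1/n) Σ_k ½ log(S_k / R), where S_k is the innovation variance. A minimal scalar sketch (the model parameters are illustrative):

```python
import numpy as np

# Scalar Gauss-Markov state observed in Gaussian noise:
#   x_{k+1} = a x_k + w_k,  w_k ~ N(0, Q);   y_k = x_k + v_k,  v_k ~ N(0, R).
# Here (1/n) I(X^n; Y^n) is the average of 0.5*log2(S_k / R), with
# S_k = P_{k|k-1} + R the Kalman innovation variance, since
# h(Y^n) = sum_k 0.5*log(2*pi*e*S_k) and h(Y^n|X^n) = n*0.5*log(2*pi*e*R).
a, Q, R, n = 0.95, 0.1, 1.0, 10_000
P = Q / (1 - a**2)                 # stationary prior state variance
rate = 0.0
for _ in range(n):
    S = P + R                      # innovation variance
    rate += 0.5 * np.log2(S / R)
    P_post = P - P**2 / S          # measurement update
    P = a**2 * P_post + Q          # time update
print("information rate ~", rate / n, "bits per measurement")
```
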
  • Accurate Lower Bounds on 2-D Constraint Capacities From Corner Transfer Matrices

    Publication Year: 2014 , Page(s): 3845 - 3858
    PDF (1682 KB) | HTML

    We analyse the capacity of several 2-D constraint families: the exclusion, coloring, parity, and charge model families. Using Baxter's corner transfer matrix formalism combined with the corner transfer matrix renormalization group method of Nishino and Okunishi, we calculate very tight lower bounds and estimates on the growth rates of these models. Our results substantially improve previously known lower bounds and lead to the surprising conjecture that the capacities of the even and charge(3) constraints are identical.

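For context, the simplest baseline for such capacities is the strip transfer matrix: the capacity of the hard-square (exclusion) constraint is lim_m (1/m) log2 λ_max(T_m). The sketch below computes this crude estimate; it is not the paper's corner-transfer-matrix method, which converges far faster.

```python
import numpy as np

# Strip transfer matrix for the hard-square ("exclusion") constraint:
# binary arrays with no two adjacent 1s horizontally or vertically.
# The estimate approaches ~0.5879 bits/site as the strip width m grows.
m = 12                                                   # strip width
rows = [r for r in range(1 << m) if r & (r << 1) == 0]   # no horizontal 11
T = np.array([[1.0 if r & s == 0 else 0.0 for s in rows] for r in rows])
lam = np.linalg.eigvalsh(T).max()                        # T is symmetric
print("capacity estimate:", np.log2(lam) / m)
```
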
  • An Improvement of the Gilbert–Varshamov Bound Over Nonprime Fields

    Publication Year: 2014 , Page(s): 3859 - 3861
    PDF (140 KB) | HTML

    The Gilbert-Varshamov bound guarantees the existence of families of codes over the finite field F_ℓ with good asymptotic parameters. We show that this bound can be improved for all nonprime fields F_ℓ with ℓ ≥ 49, except possibly ℓ = 125. We observe that the same improvement even holds within the class of transitive codes and within the class of self-orthogonal codes.

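For reference, the asymptotic Gilbert-Varshamov bound being improved states that rate R = 1 - h_q(δ) is achievable at relative distance δ, where h_q is the q-ary entropy function. A minimal sketch evaluating it (the example parameters are ours):

```python
import numpy as np

def h_q(x, q):
    """q-ary entropy function (logs base q), for 0 <= x <= 1 - 1/q."""
    if x == 0.0:
        return 0.0
    return (x*np.log(q - 1) - x*np.log(x) - (1 - x)*np.log(1 - x)) / np.log(q)

def gv_rate(delta, q):
    """Asymptotic Gilbert-Varshamov rate: R = 1 - h_q(delta)."""
    return 1.0 - h_q(delta, q)

# e.g. over F_49, one of the nonprime fields covered by the improvement
for delta in (0.1, 0.3, 0.5):
    print(f"delta = {delta}: R >= {gv_rate(delta, 49):.4f}")
```
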
  • An Improvement to Levenshtein's Upper Bound on the Cardinality of Deletion Correcting Codes

    Publication Year: 2014 , Page(s): 3862 - 3870
    PDF (624 KB) | HTML

    We consider deletion correcting codes over a q-ary alphabet. It is well known that any code capable of correcting s deletions can also correct any combination of s total insertions and deletions. To obtain asymptotic upper bounds on code size, we apply a packing argument to channels that perform different mixtures of insertions and deletions. Even though the set of codes is identical for all of these channels, the bounds that we obtain vary. Prior to this paper, only the bounds corresponding to the all-insertion case and the all-deletion case were known. We recover these as special cases. The bound from the all-deletion case, due to Levenshtein, has been the best known for more than 45 years. Our generalized bound is better than Levenshtein's bound whenever the number of deletions to be corrected is larger than the alphabet size.

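The all-insertion packing argument mentioned above rests on Levenshtein's observation that the number of supersequences reachable by s insertions is the same for every length-n q-ary string, namely Σ_{i=0}^{s} C(n+s, i)(q-1)^i. Dividing the ambient space by this ball size gives an upper bound on code cardinality; a sketch with our own small example values:

```python
from math import comb

def supersequences(n, s, q):
    """Number of q-ary supersequences of length n+s of a fixed length-n
    string: independent of the string (Levenshtein),
    sum_{i=0}^{s} C(n+s, i) * (q-1)^i."""
    return sum(comb(n + s, i) * (q - 1) ** i for i in range(s + 1))

def insertion_packing_bound(n, s, q):
    """An s-deletion-correcting code also corrects s insertions, so its
    radius-s insertion balls in {0..q-1}^(n+s) must be disjoint."""
    return q ** (n + s) // supersequences(n, s, q)

# binary, one deletion, n = 10: bound is 2^11 // 12 = 170
print(insertion_packing_bound(10, 1, 2))
```
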
  • Torsion Limits and Riemann-Roch Systems for Function Fields and Applications

    Publication Year: 2014 , Page(s): 3871 - 3888
    Cited by:  Papers (1)
    PDF (553 KB) | HTML

    The Ihara limit (or constant) A(q) has been a central problem of study in the asymptotic theory of global function fields (or equivalently, algebraic curves over finite fields). It addresses global function fields with many rational points and, so far, most applications of this theory do not require additional properties. Motivated by recent applications, we require global function fields with the additional property that their degree-zero divisor class groups contain at most a small number of d-torsion points. We capture this with the notion of torsion limit, a new asymptotic quantity for global function fields. It seems that it is even harder to determine values of this new quantity than the Ihara constant. Nevertheless, some nontrivial upper bounds are derived. Apart from this new asymptotic quantity and bounds on it, we also introduce Riemann-Roch systems of equations. It turns out that this type of equation system plays an important role in the study of several other problems in the areas of arithmetic secret sharing, symmetric bilinear complexity of multiplication in finite fields, frameproof codes, and the theory of error-correcting codes. Finally, we show how our new asymptotic quantity, our bounds on it, and Riemann-Roch systems can be used to improve results in these areas.

  • Erasure List-Decodable Codes From Random and Algebraic Geometry Codes

    Publication Year: 2014 , Page(s): 3889 - 3894
    PDF (183 KB) | HTML

    Erasure list decoding was introduced to correct a larger number of erasures by outputting a list of possible candidates. In this paper, we consider both random linear codes and algebraic geometry codes for list decoding from erasures. The contributions of this paper are twofold. First, for arbitrary 0 < R < 1 and ε > 0 (R and ε are independent), we show that with high probability a q-ary random linear code of rate R is an erasure list-decodable code with constant list size q^O(1/ε) that can correct a fraction 1 - R - ε of erasures, i.e., a random linear code achieves the information-theoretic optimal tradeoff between information rate and fraction of erasures. Second, we show that algebraic geometry codes are good erasure list-decodable codes. Precisely speaking, a q-ary algebraic geometry code of rate R from the Garcia-Stichtenoth tower can correct a fraction 1 - R - 1/(√q - 1) + 1/q - ε of erasures with list size O(1/ε). This improves the Johnson bound for erasures applied to algebraic geometry codes. Furthermore, list decoding of these algebraic geometry codes can be implemented in polynomial time. Note that the code alphabet size q in this paper is constant and independent of ε.

  • Hamming Weights of the Duals of Cyclic Codes With Two Zeros

    Publication Year: 2014 , Page(s): 3895 - 3902
    Cited by:  Papers (1)
    PDF (982 KB) | HTML

    Cyclic codes are an interesting type of linear codes and have wide applications in communication and storage systems due to their efficient encoding and decoding algorithms. In this paper, let F_r be a finite field with r = q^m. Suppose that g_1, g_2 ∈ F_r^* are not conjugates over F_q, ord(g_1) = n_1, ord(g_2) = n_2, d = gcd(n_1, n_2), and n = n_1 n_2 / d. Let F_q(g_1) = F_{q^{m_1}}, F_q(g_2) = F_{q^{m_2}}, and let T_i denote the trace function from F_{q^{m_i}} to F_q for i = 1, 2. We define a cyclic code C_{(q,m,n_1,n_2)} = {c(a, b) : a ∈ F_{q^{m_1}}, b ∈ F_{q^{m_2}}}, where c(a, b) = (T_1(a g_1^0) + T_2(b g_2^0), T_1(a g_1^1) + T_2(b g_2^1), ..., T_1(a g_1^{n-1}) + T_2(b g_2^{n-1})). We mainly use Gauss periods to present the weight distribution of the cyclic code C_{(q,m,n_1,n_2)}. As applications, we determine the weight distribution of the cyclic code C_{(q,m,q^{m_1}-1,q^{m_2}-1)} with gcd(m_1, m_2) = 1; in particular, it is a three-weight cyclic code if gcd(q-1, m_1-m_2) = 1. We also explicitly determine the weight distributions of some classes of cyclic codes including several classes of four-weight cyclic codes.

  • On the Weight Distribution of Cyclic Codes With Niho Exponents

    Publication Year: 2014 , Page(s): 3903 - 3912
    PDF (871 KB) | HTML

    Recently, there has been intensive research on the weight distributions of cyclic codes. In this paper, we compute the weight distributions of three classes of cyclic codes with Niho exponents. More specifically, we obtain two classes of binary three-weight and four-weight cyclic codes and a class of nonbinary four-weight cyclic codes. The weight distributions follow from the determination of value distributions of certain exponential sums. Several examples are presented to show that some of our codes are optimal and some have the best known parameters.

  • Non-Binary Protograph-Based LDPC Codes: Enumerators, Analysis, and Designs

    Publication Year: 2014 , Page(s): 3913 - 3941
    Cited by:  Papers (3)
    PDF (5013 KB) | HTML

    This paper provides a comprehensive analysis of nonbinary low-density parity-check (LDPC) codes built out of protographs. We consider both random and constrained edge-weight labeling, and refer to the former as the unconstrained nonbinary protograph-based LDPC codes (U-NBPB codes) and to the latter as the constrained nonbinary protograph-based LDPC codes (C-NBPB codes). Equipped with combinatorial definitions extended to the nonbinary domain, ensemble enumerators of codewords, trapping sets, stopping sets, and pseudocodewords are calculated. The exact enumerators are presented in the finite-length regime, and the corresponding growth rates are calculated in the asymptotic regime. An EXIT chart tool for computing the iterative decoding thresholds of protograph-based LDPC codes is presented, followed by several examples of finite-length U-NBPB and C-NBPB codes with high performance. Throughout this paper, we provide accompanying examples, which demonstrate the advantage of nonbinary protograph-based LDPC codes over their binary counterparts and over random constructions. The results presented in this paper advance the analytical toolbox of nonbinary graph-based codes.

  • Short Q-Ary Fixed-Rate WOM Codes for Guaranteed Rewrites and With Hot/Cold Write Differentiation

    Publication Year: 2014 , Page(s): 3942 - 3958
    Cited by:  Papers (1)
    PDF (1866 KB) | HTML

    To the body of works on rewrite codes for constrained memories, we add a comprehensive study in a direction that is especially relevant to practical storage. The subject of this paper is codes for the q-ary extension of the write-once memories model, with input sizes that are fixed throughout the write sequence. Seven code constructions are given with guarantees on the number of writes they can support. For the parameters addressed by the constructions, we also prove upper bounds on the number of writes, which prove the optimality of three of the constructions. We concentrate on codes with short block lengths to keep the complexity of decoding and updates within the feasibility of practical implementation. Even with these short blocks the constructed codes are shown to be within a small additive constant from capacity for an arbitrarily large number of input bits. Part of the study addresses a new rewrite model where some of the input bits can be updated multiple times in a write sequence (hot bits), while others are updated at most once (cold bits). We refer to this new model as hot/cold rewrite codes. It is shown that adding cold bits to a rewrite code has a negligible effect on the total number of writes, while adding the important feature of leveling the physical wear of memory cells between hot and cold input data.

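For readers new to the write-once-memory model: cells can only change from 0 to 1, yet a code can still support several writes. The classic Rivest-Shamir code below stores 2 bits in 3 binary cells twice; it is the textbook illustration of the model, not one of the paper's q-ary fixed-rate constructions.

```python
# Classic Rivest-Shamir WOM code: write a 2-bit value into 3 binary
# write-once cells (0 -> 1 only) twice.
FIRST = {0: (0, 0, 0), 1: (0, 0, 1), 2: (0, 1, 0), 3: (1, 0, 0)}
INV = {cells: v for v, cells in FIRST.items()}

def comp(cells):
    return tuple(1 - c for c in cells)

def decode(cells):
    """Weight <= 1 patterns are first-generation; complements otherwise."""
    return INV[cells] if sum(cells) <= 1 else INV[comp(cells)]

def rewrite(cells, v):
    """Second write: keep the state if v is already stored, else raise
    cells to the complement of v's first-write pattern (only 0 -> 1)."""
    return cells if decode(cells) == v else comp(FIRST[v])

c = FIRST[2]
assert decode(c) == 2
c2 = rewrite(c, 3)
assert decode(c2) == 3 and all(a <= b for a, b in zip(c, c2))
print("write 2 then 3:", FIRST[2], "->", c2)
```
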
  • The Complexity of Approximating a Bethe Equilibrium

    Publication Year: 2014 , Page(s): 3959 - 3969
    PDF (513 KB) | HTML

    This paper resolves a common complexity issue in the Bethe approximation of statistical physics and the belief propagation (BP) algorithm of artificial intelligence. The Bethe approximation and the BP algorithm are heuristic methods for estimating the partition function and marginal probabilities in graphical models, respectively. The computational complexity of the Bethe approximation is decided by the number of operations required to solve a set of nonlinear equations, the so-called Bethe equation. Although the BP algorithm was inspired and developed independently, Yedidia, Freeman, and Weiss showed that the BP algorithm solves the Bethe equation if it converges (however, it often does not). This naturally motivates the following question to understand limitations and empirical successes of the Bethe and BP methods: is the Bethe equation computationally easy to solve? We present a message-passing algorithm solving the Bethe equation in a polynomial number of operations for general binary graphical models of n variables, where the maximum degree in the underlying graph is O(log n). Equivalently, it finds a stationary point of the Bethe free energy function. Our algorithm can be used as an alternative to BP fixing its convergence issue and is the first fully polynomial-time approximation scheme for the BP fixed-point computation in such a large class of graphical models, whereas the approximate fixed-point computation is known to be PPAD-hard (polynomial parity arguments on directed graphs) in general. We believe that our technique is of broader interest to understand the computational complexity of the cavity method in statistical physics.

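For context, the sketch below runs plain sum-product belief propagation on a small binary pairwise model; by the Yedidia-Freeman-Weiss correspondence, its fixed points (when it converges) are stationary points of the Bethe free energy, which is exactly the object the paper's algorithm computes with guarantees. The model and the damping-free schedule are our own illustrative choices.

```python
import numpy as np

# Sum-product BP on a small binary pairwise model
#   p(x) ~ exp( sum_i th[i] x_i + sum_{(i,j)} J_ij x_i x_j ),  x_i in {-1,+1}.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]               # a 4-cycle
th = np.array([0.2, -0.1, 0.3, 0.0])
J = {e: 0.5 for e in edges}
vals = np.array([-1.0, 1.0])
nbrs = {v: [u for e in edges if v in e for u in e if u != v] for v in range(4)}
msgs = {(i, j): np.ones(2) for a, b in edges for i, j in ((a, b), (b, a))}

for _ in range(200):                                   # synchronous updates
    new = {}
    for i, j in msgs:
        Jij = J.get((i, j), J.get((j, i)))
        inc = np.ones(2)                               # messages into i,
        for k in nbrs[i]:                              # except from j
            if k != j:
                inc *= msgs[(k, i)]
        out = np.array([sum(np.exp(th[i]*xi + Jij*xi*xj) * inc[a]
                            for a, xi in enumerate(vals)) for xj in vals])
        new[(i, j)] = out / out.sum()
    msgs = new

for i in range(4):                                     # node beliefs
    b = np.exp(th[i] * vals)
    for k in nbrs[i]:
        b *= msgs[(k, i)]
    print(f"P(x_{i} = +1) ~ {b[1] / b.sum():.4f}")
```
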
  • Sparse Recovery With Unknown Variance: A LASSO-Type Approach

    Publication Year: 2014 , Page(s): 3970 - 3988
    PDF (905 KB) | HTML

    We address the issue of estimating the regression vector β in the generic s-sparse linear model y = Xβ + z, with β ∈ ℝ^p, y ∈ ℝ^n, z ~ N(0, σ²I), and p > n when the variance σ² is unknown. We study two least absolute shrinkage and selection operator (LASSO)-type methods that jointly estimate β and the variance. These estimators are minimizers of the ℓ1-penalized least-squares functional, where the relaxation parameter is tuned according to two different strategies. In the first strategy, the relaxation parameter is of the order σ̂ √(log p), where σ̂² is the empirical variance. In the second strategy, the relaxation parameter is chosen so as to enforce a tradeoff between the fidelity and the penalty terms at optimality. For both estimators, our assumptions are similar to the ones proposed by Candès and Plan in Ann. Stat. (2009), for the case where σ² is known. We prove that our estimators ensure exact recovery of the support and sign pattern of β with high probability. We present simulation results showing that the first estimator enjoys nearly the same performances in practice as the standard LASSO (known variance case) for a wide range of the signal-to-noise ratio. Our second estimator is shown to outperform both in terms of false detection, when the signal-to-noise ratio is low.

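A hedged sketch of the first strategy described above: alternate a LASSO fit at penalty level proportional to σ̂ √(log p) with re-estimation of σ̂ from the residuals, scaled-LASSO style. The constants, stopping rule, and use of scikit-learn are our own choices, not the paper's estimator.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n, p, s, sigma = 100, 400, 5, 1.0
X = rng.standard_normal((n, p))
beta = np.zeros(p); beta[:s] = 3.0
y = X @ beta + sigma * rng.standard_normal(n)

# sklearn minimizes (1/2n)||y - Xb||^2 + alpha*||b||_1, so the penalty
# level sigma_hat*sqrt(log p) is rescaled accordingly; the sqrt(2/n)
# factor is one common normalization, not the paper's exact constant.
sigma_hat = float(np.std(y))              # crude initial estimate
for _ in range(10):
    alpha = sigma_hat * np.sqrt(2 * np.log(p) / n)
    fit = Lasso(alpha=alpha, max_iter=100_000).fit(X, y)
    resid = y - fit.predict(X)
    sigma_hat = float(np.sqrt(resid @ resid / n))

print("sigma_hat:", round(sigma_hat, 3))
print("support:", np.flatnonzero(np.abs(fit.coef_) > 1e-8))
```
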
  • Sparse Approximation and Recovery by Greedy Algorithms

    Publication Year: 2014 , Page(s): 3989 - 4000
    PDF (282 KB) | HTML

    We study sparse approximation by greedy algorithms. Our contribution is twofold. First, we prove exact recovery with high probability of random K-sparse signals within ⌈K(1+ε)⌉ iterations of the orthogonal matching pursuit (OMP). This result shows that in a probabilistic sense, the OMP is almost optimal for exact recovery. Second, we prove the Lebesgue-type inequalities for the weak Chebyshev greedy algorithm, a generalization of the weak orthogonal matching pursuit to the case of a Banach space. The main novelty of these results is a Banach space setting instead of a Hilbert space setting. However, even in the case of a Hilbert space, our results add some new elements to known results on the Lebesgue-type inequalities for the restricted isometry property dictionaries. Our technique is a development of the recent technique created by Zhang.

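Orthogonal matching pursuit itself is a few lines of linear algebra: pick the column most correlated with the residual, re-fit by least squares on the selected support, repeat. A minimal noiseless sketch (the paper's result allows roughly ⌈K(1+ε)⌉ such iterations):

```python
import numpy as np

def omp(X, y, n_iter):
    """Orthogonal matching pursuit: greedily add the column most
    correlated with the residual, then least-squares re-fit."""
    residual, support, coef = y.copy(), [], None
    for _ in range(n_iter):
        j = int(np.argmax(np.abs(X.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
        residual = y - X[:, support] @ coef
    return support, coef

rng = np.random.default_rng(0)
n, p, K = 80, 200, 5
X = rng.standard_normal((n, p)) / np.sqrt(n)      # near-unit-norm columns
beta = np.zeros(p)
true_support = rng.choice(p, K, replace=False)
beta[true_support] = 1.0
y = X @ beta                                      # noiseless K-sparse signal
support, coef = omp(X, y, n_iter=K)
print(sorted(true_support), "->", sorted(support))
```
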
  • Near-Optimal Adaptive Compressed Sensing

    Publication Year: 2014 , Page(s): 4001 - 4012
    PDF (1644 KB) | HTML

    This paper proposes a simple adaptive sensing and group testing algorithm for sparse signal recovery. The algorithm, termed compressive adaptive sense and search (CASS), is shown to be near-optimal in that it succeeds at the lowest possible signal-to-noise-ratio (SNR) levels, improving on previous work in adaptive compressed sensing. Like traditional compressed sensing based on random nonadaptive design matrices, the CASS algorithm requires only k log n measurements to recover a k-sparse signal of dimension n. However, CASS succeeds at SNR levels that are a factor log n less than required by standard compressed sensing. From the point of view of constructing and implementing the sensing operation as well as computing the reconstruction, the proposed algorithm is substantially less computationally intensive than standard compressed sensing. The CASS is also demonstrated to perform considerably better in practice through simulation. To the best of our knowledge, this is the first demonstration of an adaptive compressed sensing algorithm with near-optimal theoretical guarantees and excellent practical performance. This paper also shows that methods like compressed sensing, group testing, and pooling have an advantage beyond simply reducing the number of measurements or tests: adaptive versions of such methods can also improve detection and estimation performance when compared with nonadaptive direct (uncompressed) sensing.

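A toy rendering of the sense-and-search idea for a single spike: measure aggregates over halves of the candidate set and recurse on the more energetic half, using about log2 n adaptive measurements. This is not the CASS algorithm itself, which handles k-sparse signals under a total sensing-energy budget; in particular, we let every measurement incur unit noise regardless of set size.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1 << 12                        # signal dimension
x = np.zeros(n)
spike = rng.integers(n)
x[spike] = 6.0                     # amplitude comfortably above unit noise

candidates = np.arange(n)
while len(candidates) > 1:         # ~log2(n) adaptive measurements
    half = len(candidates) // 2
    left, right = candidates[:half], candidates[half:]
    # one noisy aggregate measurement per half (unit noise, simplified)
    m_left = x[left].sum() + rng.standard_normal()
    m_right = x[right].sum() + rng.standard_normal()
    candidates = left if abs(m_left) >= abs(m_right) else right

print("recovered:", candidates[0], "true:", spike)
```
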

Aims & Scope

IEEE Transactions on Information Theory publishes papers concerned with the transmission, processing, and utilization of information.

Meet Our Editors

Editor-in-Chief
Frank R. Kschischang

Department of Electrical and Computer Engineering