
IEEE Transactions on Information Theory

Issue 6 • June 2012


  • Table of contents

    Page(s): C1 - C4
    Save to Project icon | Request Permissions | PDF file iconPDF (166 KB)  
    Freely Available from IEEE
  • IEEE Transactions on Information Theory publication information

    Page(s): C2
  • Fixed-Length Lossy Compression in the Finite Blocklength Regime

    Page(s): 3309 - 3338

    This paper studies the minimum achievable source coding rate as a function of blocklength n and probability ϵ that the distortion exceeds a given level d. Tight general achievability and converse bounds are derived that hold at arbitrary fixed blocklength. For stationary memoryless sources with separable distortion, the minimum achievable rate is shown to be closely approximated by R(d) + √(V(d)/n) Q^{-1}(ϵ), where R(d) is the rate-distortion function, V(d) is the rate dispersion, a characteristic of the source which measures its stochastic variability, and Q^{-1}(·) is the inverse of the standard Gaussian complementary cumulative distribution function.
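
    As a quick illustration of how this Gaussian approximation is evaluated, the hedged sketch below plugs values into R(d) + √(V(d)/n) Q^{-1}(ϵ) using scipy's inverse complementary CDF. The rate-distortion function R(d) = 1 − h(d) is the standard one for an equiprobable binary source under Hamming distortion; the dispersion value V_d is a placeholder assumption, not a quantity taken from the paper.

    ```python
    import numpy as np
    from scipy.stats import norm

    def binary_entropy(p):
        """Binary entropy in bits."""
        return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

    def finite_blocklength_rate(R_d, V_d, n, eps):
        """Gaussian approximation R(d) + sqrt(V(d)/n) * Q^{-1}(eps) from the abstract."""
        return R_d + np.sqrt(V_d / n) * norm.isf(eps)  # norm.isf is Q^{-1}

    d = 0.11
    R_d = 1.0 - binary_entropy(d)   # R(d) for an equiprobable binary source, Hamming distortion
    V_d = 0.5                       # placeholder dispersion value (assumption, not from the paper)
    for n in (100, 1000, 10000):
        print(n, finite_blocklength_rate(R_d, V_d, n, eps=1e-3))
    ```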

  • Cascade and Triangular Source Coding With Side Information at the First Two Nodes

    Page(s): 3339 - 3349

    We consider the cascade and triangular rate-distortion problem where side information is known to the source encoder and to the first user but not to the second user. We characterize the rate-distortion region for these problems, as well as some of their extensions. For the quadratic Gaussian case, we show that it is sufficient to consider jointly Gaussian distributions, which leads to an explicit solution.

  • On the Generalized Gaussian CEO Problem

    Page(s): 3350 - 3372

    This paper considers a distributed source coding (DSC) problem where L encoders observe noisy linear combinations of K correlated remote Gaussian sources, and separately transmit the compressed observations to the decoder to reconstruct the remote sources subject to a sum-distortion constraint. This DSC problem is referred to as the generalized Gaussian CEO problem since it can be viewed as a generalization of the quadratic Gaussian CEO problem, where the number of remote sources is K = 1. First, we provide a new outer region obtained using the entropy power inequality and an equivalence argument (in the sense of having the same rate-distortion region and Berger-Tung inner region) among a certain class of generalized Gaussian CEO problems. We then give two sufficient conditions for our new outer region to match the inner region achieved by Berger-Tung schemes; the second matching condition implies that in the low-distortion regime the Berger-Tung inner rate region is always tight, while in the high-distortion regime the same region is tight if a certain condition holds. The sum-rate part of the outer region is also studied and shown to meet the Berger-Tung sum-rate upper bound under a certain condition, which is obtained using the Karush-Kuhn-Tucker conditions of the underlying convex semidefinite optimization problem and is in general weaker than the two conditions above for rate-region tightness.

  • Secret Key Generation for Correlated Gaussian Sources

    Page(s): 3373 - 3391

    Secret key generation by multiple terminals is considered based on their observations of jointly distributed Gaussian signals, followed by public communication among themselves. Exploiting an inherent connection between secrecy generation and lossy data compression, two main contributions are made. The first is a characterization of strong secret key capacity, and entails a converse proof technique that is valid for real-valued (and not necessarily Gaussian) as well as finite-valued signals. The capacity formula acquires a simple form when the terminals observe “symmetrically correlated” jointly Gaussian signals. For the latter setup with two terminals, considering schemes that involve quantization at one terminal, the best rate of an achievable secret key is characterized as a function of quantization rate; secret key capacity is attained as the quantization rate tends to infinity. Structured codes are shown to attain the optimum tradeoff between secret key rate and quantization rate, constituting our second main contribution.

  • Mixing, Ergodic, and Nonergodic Processes With Rapidly Growing Information Between Blocks

    Page(s): 3392 - 3401

    We construct mixing processes over an infinite alphabet and ergodic processes over a finite alphabet for which the Shannon mutual information between adjacent blocks of length n grows as n^β, where β ∈ (0,1). The processes are a modification of nonergodic Santa Fe processes, which were introduced in the context of natural language modeling. Similar rates of mutual information growth for the latter processes are also established in this paper. As an auxiliary result, it is shown that infinite direct products of mixing processes are also mixing.

  • Relations Between Redundancy Patterns of the Shannon Code and Wave Diffraction Patterns of Partially Disordered Media

    Page(s): 3402 - 3406

    The average redundancy of the Shannon code, R_n, as a function of the block length n, is known to exhibit two very different types of behavior, depending on the rationality or irrationality of certain parameters of the source: it either converges to 1/2 as n grows without bound, or it may have a nonvanishing, oscillatory, (quasi-) periodic pattern around the value 1/2 for all large n. In this paper, we attempt to shed some light on this erratic behavior of R_n by drawing an analogy with the physics of wave propagation, in particular, the elementary theory of scattering and diffraction. It turns out that there are two types of behavior of wave diffraction patterns formed by crystals, which are correspondingly analogous to the two types of patterns of R_n. When the crystal is perfect, the diffraction intensity spectrum exhibits very sharp peaks, a.k.a. Bragg peaks, at wavelengths of full constructive interference. These wavelengths correspond to the frequencies of the harmonic waves of the oscillatory mode of R_n. On the other hand, when the crystal is imperfect and there is a considerable degree of disorder in its structure, the Bragg peaks disappear, and the behavior of this mode is analogous to the one where R_n is convergent.
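
    Since the Shannon code assigns the codeword length ⌈−log2 P(x^n)⌉ to a block x^n, the per-block average redundancy R_n can be computed exactly for a binary memoryless source. The sketch below is an illustration, not code from the paper; varying the source parameter p shows the two kinds of behavior contrasted above (the rationality condition mentioned in the abstract determines which one appears).

    ```python
    from math import comb, log2, ceil

    def shannon_redundancy(p, n):
        """Average per-block redundancy E[ceil(-log2 P(X^n)) + log2 P(X^n)]
        of the Shannon code for a Bernoulli(p) memoryless source."""
        total = 0.0
        for k in range(n + 1):
            prob_word = p ** k * (1 - p) ** (n - k)   # probability of any word with k ones
            self_info = -log2(prob_word)
            total += comb(n, k) * prob_word * (ceil(self_info) - self_info)
        return total

    p = 0.2
    for n in (10, 20, 40, 80):
        print(n, round(shannon_redundancy(p, n), 4))
    ```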

  • Optimal Function Computation in Directed and Undirected Graphs

    Page(s): 3407 - 3418

    We consider the problem of information aggregation in sensor networks, where one is interested in computing a function of the sensor measurements. We allow for block processing and study in-network function computation in directed graphs and undirected graphs. We study how the structure of the function affects the encoding strategies and the effect of interactive information exchange. Depending on the application, there could be a designated collector node, or every node might want to compute the function. We begin by considering a directed graph G = (V, E) on the sensor nodes, where the goal is to determine the optimal encoders on each edge which achieve function computation at the collector node. Our goal is to characterize the rate region in R^{|E|}, i.e., the set of points for which there exist feasible encoders with given rates which achieve zero-error computation for asymptotically large block length. We determine the solution for directed trees, specifying the optimal encoder and decoder for each edge. For general directed acyclic graphs, we provide an outer bound on the rate region by finding the disambiguation requirements for each cut, and describe examples where this outer bound is tight. Next, we address the scenario where nodes are connected in an undirected tree network, and every node wishes to compute a given symmetric Boolean function of the sensor data. Undirected edges permit interactive computation, and we therefore study the effect of interaction on the aggregation and communication strategies. We focus on sum-threshold functions and determine the minimum worst-case total number of bits to be exchanged on each edge. The optimal strategy involves recursive in-network aggregation which is reminiscent of message passing. In the case of general graphs, we present a cut-set lower bound and an achievable scheme based on aggregation along trees. For complete graphs, we prove that the complexity of this scheme is no more than twice that of the optimal scheme.

  • Secret Writing on Dirty Paper: A Deterministic View

    Page(s): 3419 - 3429

    Recently, there has been a lot of success in using the deterministic approach to provide approximate characterizations of Gaussian network capacity. In this paper, we take a deterministic view and revisit the problem of the wiretap channel with side information. A precise characterization of the secrecy capacity is obtained for a linear deterministic model, which naturally suggests a coding scheme that we show achieves the secrecy capacity of the degraded Gaussian model (dubbed “secret writing on dirty paper”) to within half a bit.

  • Capacity Region of Finite State Multiple-Access Channels With Delayed State Information at the Transmitters

    Page(s): 3430 - 3452

    A single-letter characterization is provided for the capacity region of finite-state multiple-access channels. The channel state is a Markov process, the transmitters have access to delayed state information, and channel state information is available at the receiver. The delays of the channel state information are assumed to be asymmetric at the transmitters. We apply the result to obtain the capacity region for a finite-state Gaussian multiple-access channel (MAC) and for a finite-state multiple-access fading channel. We derive power control strategies that maximize the capacity region for these channels.

  • Fading Broadcast Channels With State Information at the Receivers

    Page(s): 3453 - 3471

    Despite considerable progress, the capacity region of fading broadcast channels with channel state known at the receivers but unknown at the transmitter remains unresolved. We address this subject by introducing a layered erasure broadcast channel model in which each component channel has a state that specifies the received signal levels in an instance of a deterministic binary expansion channel. We find the capacity region of this class of broadcast channels. The capacity achieving strategy assigns each signal level to the user that derives the maximum weighted expected rate. The outer bound is based on a channel enhancement that creates a degraded broadcast channel for which the capacity region is known. This same approach is then used to find inner and outer bounds to the capacity region of fading Gaussian broadcast channels. The achievability scheme employs a superposition of binary inputs. For intermittent additive white Gaussian noise (AWGN) channels and for Rayleigh fading channels, the achievable rates are observed to be within 1-2 bits of the outer bound at high SNR. We also prove that the achievable rate region is within 6.386 bits/s/Hz of the capacity region for all fading AWGN broadcast channels.

  • Capacity Region of Vector Gaussian Interference Channels With Generally Strong Interference

    Page(s): 3472 - 3496

    An interference channel is said to have strong interference if a certain pair of mutual information inequalities are satisfied for all input distributions. These inequalities assure that the capacity of the interference channel with strong interference is achieved by jointly decoding the signal and the interference. This definition of strong interference applies to discrete memoryless, scalar and vector Gaussian interference channels. However, there exist vector Gaussian interference channels that may not satisfy the strong interference condition but for which the capacity can still be achieved by jointly decoding the signal and the interference. This kind of interference is called generally strong interference. Sufficient conditions for a vector Gaussian interference channel to have generally strong interference are derived. The sum-rate capacity and the boundary points of the capacity region are also determined.

  • On Conditions for Linearity of Optimal Estimation

    Page(s): 3497 - 3508

    When is optimal estimation linear? It is well known that when a Gaussian source is contaminated with Gaussian noise, a linear estimator minimizes the mean square estimation error. This paper analyzes, more generally, the conditions for linearity of optimal estimators. Given a noise (or source) distribution and a specified signal-to-noise ratio (SNR), we derive conditions for existence and uniqueness of a source (or noise) distribution for which the L_p optimal estimator is linear. We then show that if the noise and source variances are equal, then the matching source must be distributed identically to the noise. Moreover, we prove that the Gaussian source-channel pair is unique in the sense that it is the only source-channel pair for which the mean square error (MSE) optimal estimator is linear at more than one SNR value. Furthermore, we show the asymptotic linearity of MSE optimal estimators for low SNR if the channel is Gaussian regardless of the source and, vice versa, for high SNR if the source is Gaussian regardless of the channel. The extension to the vector case is also considered, where besides the conditions inherited from the scalar case, additional constraints must be satisfied to ensure linearity of the optimal estimator.
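
    The classical fact cited at the start of the abstract is easy to check numerically: for a Gaussian source in independent Gaussian noise, the MMSE estimator E[X|Y = y] is the linear map (σ_x²/(σ_x² + σ_n²)) y. The sketch below compares this linear rule with a Monte Carlo estimate of the conditional mean; it illustrates only the well-known Gaussian case, not the paper's general matching conditions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    sigma_x, sigma_n = 1.0, 0.5
    x = rng.normal(0.0, sigma_x, size=1_000_000)
    y = x + rng.normal(0.0, sigma_n, size=x.size)

    # Linear MMSE estimator for the Gaussian source / Gaussian noise pair
    gain = sigma_x**2 / (sigma_x**2 + sigma_n**2)

    # Empirical conditional mean E[X | Y ≈ y0] near a few reference points
    for y0 in (-1.0, 0.0, 1.5):
        mask = np.abs(y - y0) < 0.02
        print(y0, gain * y0, x[mask].mean())   # linear rule vs. Monte Carlo estimate
    ```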

  • Random Action of Compact Lie Groups and Minimax Estimation of a Mean Pattern

    Page(s): 3509 - 3520

    This paper considers the problem of estimating a mean pattern in the setting of Grenander's pattern theory. Shape variability in a dataset of curves or images is modeled by the random action of elements in a compact Lie group on an infinite-dimensional space. In the case of observations contaminated by an additive Gaussian white noise, it is shown that estimating a reference template in the setting of Grenander's pattern theory falls into the category of deconvolution problems over Lie groups. To obtain this result, we build an estimator of a mean pattern by using Fourier deconvolution and harmonic analysis on compact Lie groups. In an asymptotic setting where the number of observed curves or images tends to infinity, we derive upper and lower bounds for the minimax quadratic risk over Sobolev balls. The resulting rate depends on the smoothness of the density of the random Lie group elements representing shape variability in the data, which makes a connection between estimating a mean pattern and standard deconvolution problems in nonparametric statistics.

  • Extrinsic Mean of Brownian Distributions on Compact Lie Groups

    Page(s): 3521 - 3535

    This paper studies Brownian distributions on compact Lie groups. These are defined as the marginal distributions of Brownian processes and are intended as a natural extension of the well-known normal distributions to compact Lie groups. It is shown that this definition preserves key properties of normal distributions. In particular, Brownian distributions transform in a nice way under group operations and satisfy an extension of the central limit theorem. Brownian distributions on a compact Lie group G belong to one of two parametric families N_L(g, C) and N_R(g, C), with g ∈ G and C a positive-definite symmetric matrix. In particular, the parameter g appears as a location parameter. An approach based on the extrinsic mean for estimation of the parameters g and C is studied in detail. It is shown that g is the unique extrinsic mean for a Brownian distribution N_L(g, C) or N_R(g, C). Resulting estimates are proved to be consistent and asymptotically normal. While they may also be used to simultaneously estimate g and C, it is seen this requires that G be embedded into a higher dimensional matrix Lie group. Going beyond Brownian distributions, it is shown the extrinsic mean can be used to recover the location parameter for a wider class of distributions arising more generally from Lévy processes. The compact Lie group structure places limitations on the analogy between normal distributions and Brownian distributions. This is illustrated by the study of multivariate Brownian distributions. These are introduced as Brownian distributions on some product group, e.g., G × G. This paper describes their covariance structure and considers its transformation under group operations.

  • Generalized Framework for the Level Crossing Analysis of Ordered Random Processes

    Page(s): 3536 - 3547

    This paper investigates the following general problem relating to ordered random processes: given n independent but not necessarily identical random processes, how frequently, on average, does any given process become one of the p (p = 1, 2, ..., n-1) largest processes? This is a fundamental problem arising in the design and analysis of contemporary multidimensional wireless communication systems (e.g., multiantenna, multiuser) employing opportunistic selection. We formulate this problem as one involving the level crossing rate (LCR) of a carefully defined ordered random process across the zero threshold, which we solve by developing a new mathematical framework based on the theory of permanents. For the case where the processes correspond to time-varying Rayleigh fading channels, we present exact closed-form formulas for the LCR, simplified tight upper bounds, as well as asymptotic results for n and p approaching infinity with fixed ratio. These results reveal interesting fundamental limits for the LCR, and are shown to give meaningful insight even for small values of n and p. We further use our mathematical framework to characterize the required per-branch and overall switching rate of a generalized selection combining diversity receiver, allowing for different average powers for each branch. With the aid of majorization theory, we demonstrate that the overall switching rate is maximized when the power delay profile is uniform.

  • Error Probability Bounds for Balanced Binary Relay Trees

    Page(s): 3548 - 3563

    We study the detection error probability associated with a balanced binary relay tree, where the leaves of the tree correspond to N identical and independent sensors. The root of the tree represents a fusion center that makes the overall detection decision. Each of the other nodes in the tree is a relay node that combines two binary messages to form a single output binary message. Only the leaves are sensors. In this way, the information from the sensors is aggregated into the fusion center via the relay nodes. In this context, we describe the evolution of the Type I and Type II error probabilities of the binary data as it propagates from the leaves toward the root. Tight upper and lower bounds for the total error probability at the fusion center as functions of N are derived. These characterize how fast the total error probability converges to 0 with respect to N, even if the individual sensors have error probabilities that converge to 1/2.

  • Group-Ordered SPRT for Decentralized Detection

    Page(s): 3564 - 3574

    The problem of decentralized detection in a large wireless sensor network is considered. An adaptive decentralized detection scheme, the group-ordered sequential probability ratio test (GO-SPRT), is proposed. This scheme groups sensors according to the informativeness of their data. The fusion center collects sensor data sequentially, starting from the most informative data, and terminates the process when the target performance is reached. Wald's approximations are shown to be applicable even though the problem setting deviates from that of the traditional sequential probability ratio test (SPRT). To analyze the efficiency of GO-SPRT, the asymptotic equivalence between the average sample number of GO-SPRT, which is a function of a multinomial random variable, and a function of a normal random variable is established. Closed-form approximations for the average sample number are then obtained. Compared with a fixed-sample-size test and the traditional SPRT, the proposed scheme achieves significant savings in the cost of data fusion.
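
    For background, the sketch below implements the plain Wald SPRT that the proposed scheme builds on, for testing between two Gaussian means with the standard thresholds log((1−β)/α) and log(β/(1−α)); it is the classical sequential test only, not the grouped, ordered variant analyzed in the paper.

    ```python
    import numpy as np

    def wald_sprt(samples, mu0, mu1, sigma, alpha=0.01, beta=0.01):
        """Classical Wald SPRT for H0: mean=mu0 vs H1: mean=mu1 (known sigma).
        Returns (decision, number of samples used)."""
        upper = np.log((1 - beta) / alpha)    # accept H1 when the LLR crosses this
        lower = np.log(beta / (1 - alpha))    # accept H0 when the LLR crosses this
        llr = 0.0
        for n, x in enumerate(samples, start=1):
            llr += (mu1 - mu0) * (x - (mu0 + mu1) / 2) / sigma**2  # per-sample Gaussian LLR
            if llr >= upper:
                return "H1", n
            if llr <= lower:
                return "H0", n
        return "undecided", len(samples)

    rng = np.random.default_rng(1)
    data = rng.normal(0.5, 1.0, size=10_000)   # data actually drawn under H1
    print(wald_sprt(data, mu0=0.0, mu1=0.5, sigma=1.0))
    ```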

  • Distributed Parameter Estimation in Sensor Networks: Nonlinear Observation Models and Imperfect Communication

    Page(s): 3575 - 3605

    The paper studies distributed static parameter (vector) estimation in sensor networks with nonlinear observation models and noisy intersensor communication. It introduces separably estimable observation models that generalize the observability condition in linear centralized estimation to nonlinear distributed estimation. It studies two distributed estimation algorithms in separably estimable models, the NU (with its linear counterpart LU) and the NLU. Their update rule combines a consensus step (where each sensor updates its state by weight-averaging it with its neighbors' states) and an innovation step (where each sensor processes its local current observation). This makes these algorithms of the consensus + innovations type, quite different from traditional consensus algorithms. This paper proves consistency (all sensors reach consensus almost surely and converge to the true parameter value), efficiency, and asymptotic unbiasedness. For LU and NU, it proves asymptotic normality and provides convergence rate guarantees. The three algorithms are characterized by appropriately chosen decaying weight sequences. Algorithms LU and NU are analyzed in the framework of stochastic approximation theory; algorithm NLU exhibits mixed time-scale behavior and biased perturbations, and its analysis requires a different approach that is developed in this paper.
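
    The update rule described above can be sketched in a few lines for the simplest linear case. In the hedged, LU-like illustration below, each sensor mixes its state with its neighbors' states and then corrects it using its own noisy observation, with decaying consensus and innovation weights; the ring network, scalar observation model, and weight sequences are illustrative assumptions, not the paper's exact algorithm.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    theta = 2.0                                   # true scalar parameter
    N = 20                                        # number of sensors
    # Ring network: each sensor's neighbors are the two adjacent sensors.
    neighbors = {i: [(i - 1) % N, (i + 1) % N] for i in range(N)}

    x = rng.normal(0.0, 1.0, size=N)              # initial local estimates
    for t in range(1, 5001):
        beta_t = 0.5 / t                          # decaying consensus weight
        alpha_t = 1.0 / t                         # decaying innovation weight
        z = theta + rng.normal(0.0, 1.0, size=N)  # local noisy observations
        x_new = x.copy()
        for i in range(N):
            consensus = sum(x[i] - x[j] for j in neighbors[i])
            x_new[i] = x[i] - beta_t * consensus + alpha_t * (z[i] - x[i])
        x = x_new

    print(x.round(3))                             # all estimates close to theta
    ```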

  • Bounds on the Bayes Error Given Moments

    Page(s): 3606 - 3612

    We show how to compute lower bounds for the supremum Bayes error if the class-conditional distributions must satisfy moment constraints, where the supremum is with respect to the unknown class-conditional distributions. Our approach makes use of Curto and Fialkow's solutions for the truncated moment problem. The lower bound shows that the popular Gaussian assumption is not robust in this regard. We also construct an upper bound for the supremum Bayes error by constraining the decision boundary to be linear.
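
    For reference, the sketch below evaluates the Bayes error in the fully Gaussian benchmark case: two equiprobable classes with equal-variance Gaussian class-conditional densities, where the Bayes error equals Q(|μ1 − μ0|/(2σ)). It illustrates only this Gaussian baseline, not the moment-constrained bounds derived in the paper.

    ```python
    import numpy as np
    from scipy.stats import norm

    mu0, mu1, sigma = 0.0, 2.0, 1.0
    bayes_error = norm.sf(abs(mu1 - mu0) / (2 * sigma))   # Q(|mu1 - mu0| / (2*sigma))

    # Monte Carlo check using the optimal (midpoint) decision boundary
    rng = np.random.default_rng(0)
    x0 = rng.normal(mu0, sigma, 500_000)
    x1 = rng.normal(mu1, sigma, 500_000)
    thr = (mu0 + mu1) / 2
    empirical = 0.5 * np.mean(x0 > thr) + 0.5 * np.mean(x1 <= thr)
    print(bayes_error, empirical)
    ```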

  • Subspace Methods for Joint Sparse Recovery

    Page(s): 3613 - 3641

    We propose robust and efficient algorithms for the joint sparse recovery problem in compressed sensing, which simultaneously recover the supports of jointly sparse signals from their multiple measurement vectors obtained through a common sensing matrix. In a favorable situation, the unknown matrix, which consists of the jointly sparse signals, has linearly independent nonzero rows. In this case, the MUltiple SIgnal Classification (MUSIC) algorithm, originally proposed by Schmidt for the direction of arrival estimation problem in sensor array processing and later proposed and analyzed for joint sparse recovery by Feng and Bresler, provides a guarantee with the minimum number of measurements. We focus instead on the unfavorable but practically significant case of rank defect or ill-conditioning. This situation arises with a limited number of measurement vectors, or with highly correlated signal components. In this case, MUSIC fails and, in practice, none of the existing methods can consistently approach the fundamental limit. We propose subspace-augmented MUSIC (SA-MUSIC), which improves on MUSIC such that the support is reliably recovered under such unfavorable conditions. Combined with a subspace-based greedy algorithm, known as Orthogonal Subspace Matching Pursuit, which is also proposed and analyzed in this paper, SA-MUSIC provides a computationally efficient algorithm with a performance guarantee. The performance guarantees are given in terms of a version of the restricted isometry property. In particular, we also present a non-asymptotic perturbation analysis of the signal subspace estimation step, which has been missing in the previous studies of MUSIC.
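
    A minimal sketch of the baseline MUSIC criterion referred to above (Feng-Bresler), for the favorable case where the row-sparse signal matrix has full row rank: estimate the signal subspace from the measurements and pick the columns of the sensing matrix with the largest projections onto it. This illustrates plain MUSIC only, not the SA-MUSIC algorithm proposed in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    m, n, k, L = 40, 120, 6, 10        # measurements, dimension, sparsity, snapshots
    A = rng.normal(size=(m, n)) / np.sqrt(m)
    support = rng.choice(n, size=k, replace=False)
    X = np.zeros((n, L))
    X[support] = rng.normal(size=(k, L))        # row-sparse signal matrix
    Y = A @ X                                   # multiple measurement vectors

    # Signal subspace: leading k left singular vectors of Y
    U = np.linalg.svd(Y, full_matrices=False)[0][:, :k]

    # MUSIC criterion: size of the projection of each column of A onto the subspace
    proj = np.linalg.norm(U.T @ A, axis=0) / np.linalg.norm(A, axis=0)
    estimated = np.sort(np.argsort(proj)[-k:])

    print(np.sort(support), estimated)
    ```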

  • Reconstruction of Binary Functions and Shapes From Incomplete Frequency Information

    Page(s): 3642 - 3653

    The characterization of a binary function by partial frequency information is considered. We show that it is possible to reconstruct binary signals from incomplete frequency measurements via the solution of a simple linear optimization problem. We further prove that if a binary function is spatially structured (e.g., a general black-white image or an indicator function of a shape), then it can be recovered from very few low frequency measurements in general. These results would lead to efficient methods of sensing, characterizing and recovering a binary signal or a shape as well as other applications like deconvolution of binary functions blurred by a low-pass filter. Numerical results are provided to demonstrate the theoretical arguments.
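
    One plausible reading of the "simple linear optimization problem" mentioned above is a box-constrained feasibility program: find a vector in [0,1]^n consistent with the observed low-frequency coefficients and round it to binary. The toy sketch below uses partial DCT measurements and scipy's linprog purely as assumed stand-ins; the paper's exact formulation and measurement model may differ.

    ```python
    import numpy as np
    from scipy.fft import dct
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    n, m = 128, 40                       # signal length, number of low-frequency measurements
    x_true = np.zeros(n)
    x_true[30:70] = 1.0                  # binary indicator of an interval (a "shape")

    # Measurement matrix: first m rows of the orthonormal DCT
    F = dct(np.eye(n), norm="ortho", axis=0)[:m]
    b = F @ x_true

    # Feasibility LP: find x in [0,1]^n with F x = b (zero objective)
    res = linprog(c=np.zeros(n), A_eq=F, b_eq=b, bounds=[(0.0, 1.0)] * n, method="highs")
    x_hat = (res.x > 0.5).astype(float)  # round to binary
    print("mismatched entries:", int(np.sum(x_hat != x_true)))
    ```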

  • A Remark on the Restricted Isometry Property in Orthogonal Matching Pursuit

    Page(s): 3654 - 3656

    This paper demonstrates that if the restricted isometry constant δ_{K+1} of the measurement matrix A satisfies δ_{K+1} < 1/(√K + 1), then a greedy algorithm called Orthogonal Matching Pursuit (OMP) can recover every K-sparse signal x in K iterations from Ax. By contrast, a matrix is also constructed with restricted isometry constant δ_{K+1} = 1/√K such that OMP cannot recover some K-sparse signal x in K iterations. This result positively verifies the conjecture given by Dai and Milenkovic in 2009.
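
    For reference, a minimal NumPy sketch of the Orthogonal Matching Pursuit iteration discussed above: at each step it selects the column most correlated with the current residual, re-solves a least-squares problem on the selected support, and updates the residual. Recovery of every K-sparse signal in exactly K iterations is what the restricted isometry condition in the paper guarantees; the random instance below is only an illustration.

    ```python
    import numpy as np

    def omp(A, y, K):
        """Orthogonal Matching Pursuit: recover a K-sparse x from y = A x."""
        n = A.shape[1]
        support, residual = [], y.copy()
        for _ in range(K):
            j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated column
            support.append(j)
            x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ x_s           # update residual
        x = np.zeros(n)
        x[support] = x_s
        return x

    rng = np.random.default_rng(0)
    m, n, K = 60, 200, 8
    A = rng.normal(size=(m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, K, replace=False)] = rng.normal(size=K)
    x_hat = omp(A, A @ x_true, K)
    print("max error:", np.max(np.abs(x_hat - x_true)))
    ```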

  • Infinitely Many Constrained Inequalities for the von Neumann Entropy

    Page(s): 3657 - 3663

    We exhibit infinitely many new, constrained inequalities for the von Neumann entropy, and show that they are independent of each other and the known inequalities obeyed by the von Neumann entropy (basically strong subadditivity). The new inequalities were proved originally by Makarychev for the Shannon entropy, using properties of probability distributions. Our approach extends the proof of the inequalities to the quantum domain, and includes their independence for the quantum and also the classical cases.
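
    To make the baseline inequality concrete, the sketch below draws a random three-qubit density matrix, computes the von Neumann entropies of its marginals via partial traces, and verifies strong subadditivity, S(ABC) + S(B) ≤ S(AB) + S(BC); the new constrained inequalities from the paper are not reproduced here.

    ```python
    import numpy as np

    def von_neumann_entropy(rho):
        """Von Neumann entropy in bits."""
        w = np.linalg.eigvalsh(rho)
        w = w[w > 1e-12]
        return float(-np.sum(w * np.log2(w)))

    rng = np.random.default_rng(0)
    G = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
    rho = G @ G.conj().T
    rho /= np.trace(rho).real                  # random 3-qubit density matrix on systems A, B, C

    T = rho.reshape(2, 2, 2, 2, 2, 2)          # ket indices a, b, c and bra indices a', b', c'
    rho_AB = np.einsum('abcdec->abde', T).reshape(4, 4)   # trace out C
    rho_BC = np.einsum('abcaef->bcef', T).reshape(4, 4)   # trace out A
    rho_B  = np.einsum('abcaec->be', T).reshape(2, 2)     # trace out A and C

    S = von_neumann_entropy
    print(S(rho) + S(rho_B) <= S(rho_AB) + S(rho_BC) + 1e-9)   # strong subadditivity holds
    ```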


Aims & Scope

IEEE Transactions on Information Theory publishes papers concerned with the transmission, processing, and utilization of information.


Meet Our Editors

Editor-in-Chief
Frank R. Kschischang

Department of Electrical and Computer Engineering