IEEE Transactions on Information Theory

Issue 10 • October 2006

Displaying Results 1 - 25 of 44
  • Table of contents

    Publication Year: 2006, Page(s): c1 - c4
  • IEEE Transactions on Information Theory publication information

    Publication Year: 2006, Page(s): c2
  • On the Distributed Compression of Quantum Information

    Publication Year: 2006, Page(s): 4349 - 4357
    Cited by:  Papers (2)

    The problem of distributed compression for correlated quantum sources is considered. The classical version of this problem was solved by Slepian and Wolf, who showed that distributed compression could take full advantage of redundancy in the local sources created by the presence of correlations. Here it is shown that, in general, this is not the case for quantum sources, by proving a lower bound on the rate sum for irreducible sources of product states which is stronger than the one given by a naive application of Slepian-Wolf. Nonetheless, strategies taking advantage of correlation do exist for some special classes of quantum sources. For example, Devetak and Winter demonstrated the existence of such a strategy when one of the sources is classical. Optimal nontrivial strategies for a different extreme, sources of Bell states, are presented here. In addition, it is explained how distributed compression is connected to other problems in quantum information theory, including information-disturbance questions, entanglement distillation, and quantum error correction.

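For contrast with the quantum setting above, the classical Slepian-Wolf rate region is straightforward to compute. Below is a minimal sketch (Python) for a hypothetical doubly symmetric binary source; the joint distribution is an illustrative assumption, not taken from the paper.

```python
import numpy as np

# Hypothetical toy joint source: X and Y are correlated bits, X = Y w.p. 0.9.
p = np.array([[0.45, 0.05],
              [0.05, 0.45]])   # p[x, y]

def H(probs):
    """Shannon entropy in bits, ignoring zero entries."""
    probs = probs[probs > 0]
    return -np.sum(probs * np.log2(probs))

H_XY = H(p.flatten())                  # joint entropy H(X,Y)
H_X_given_Y = H_XY - H(p.sum(axis=0))  # H(X|Y) = H(X,Y) - H(Y)
H_Y_given_X = H_XY - H(p.sum(axis=1))  # H(Y|X) = H(X,Y) - H(X)

# Slepian-Wolf: separate encoders achieve any (R1, R2) with R1 >= H(X|Y),
# R2 >= H(Y|X), and R1 + R2 >= H(X,Y) -- the same rate sum as joint encoding.
print(f"H(X,Y) = {H_XY:.3f}, H(X|Y) = {H_X_given_Y:.3f}, H(Y|X) = {H_Y_given_X:.3f}")
```
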
  • Slepian-Wolf Coded Nested Lattice Quantization for Wyner-Ziv Coding: High-Rate Performance Analysis and Code Design

    Publication Year: 2006, Page(s): 4358 - 4379
    Cited by:  Papers (4)  |  Patents (1)

    Nested lattice quantization provides a practical scheme for Wyner-Ziv coding. This paper examines the high-rate performance of nested lattice quantizers and gives the theoretical performance for general continuous sources. In the quadratic Gaussian case, as the rate increases, we observe an increasing gap between the performance of finite-dimensional nested lattice quantizers and the Wyner-Ziv distortion-rate function. We argue that this is because the boundary gain decreases as the rate of the nested lattice quantizers increases. To increase the boundary gain and ultimately boost the overall performance, a new practical Wyner-Ziv coding scheme called Slepian-Wolf coded nested lattice quantization (SWC-NQ) is proposed, where Slepian-Wolf coding is applied to the quantization indices of the source for the purpose of compression with side information at the decoder. Theoretical analysis shows that for the quadratic Gaussian case and at high rate, SWC-NQ performs the same as conventional entropy-coded lattice quantization with the side information available at both the encoder and the decoder. Furthermore, a nonlinear minimum mean-square error (MSE) estimator is introduced at the decoder, which is theoretically proven to degenerate to the linear minimum MSE estimator at high rate and experimentally shown to outperform the linear estimator at low rate. Practical designs of one- and two-dimensional nested lattice quantizers together with multilevel low-density parity-check (LDPC) codes for Slepian-Wolf coding give performance close to the theoretical limits of SWC-NQ.

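A minimal one-dimensional sketch of the nesting idea behind such quantizers: the encoder quantizes to a fine lattice but transmits only the coset index modulo a coarse sublattice, and the decoder resolves the coset ambiguity using its side information. Step size, nesting ratio, and noise level below are illustrative assumptions, not the paper's design.

```python
import numpy as np

q = 0.25          # fine-lattice step (assumed)
N = 8             # nesting ratio: coarse lattice = N * q * Z (assumed)

def encode(x):
    """Quantize x to the fine lattice q*Z, send only the coset index mod N."""
    return int(np.round(x / q)) % N

def decode(idx, y):
    """Pick the fine-lattice point in coset `idx` closest to side info y."""
    m = np.round((y / q - idx) / N)   # coset members are (idx + N*m) * q
    return (idx + N * m) * q

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)                 # source samples
y = x + 0.05 * rng.normal(size=10_000)      # decoder side information
x_hat = np.array([decode(encode(xi), yi) for xi, yi in zip(x, y)])
print("rate:", np.log2(N), "bits/sample, MSE:", np.mean((x - x_hat) ** 2))
```
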
  • Source Coding for Quasiarithmetic Penalties

    Publication Year: 2006, Page(s): 4380 - 4393
    Cited by:  Papers (7)

    Whereas Huffman coding finds a prefix code minimizing mean codeword length for a given finite-item probability distribution, quasiarithmetic or quasilinear coding problems have the goal of minimizing a generalized mean of the form $\rho^{-1}\bigl(\sum_i p_i \rho(l_i)\bigr)$, where $l_i$ denotes the length of the $i$th codeword, $p_i$ denotes the corresponding probability, and $\rho$ is a monotonically increasing cost function. Such problems, proposed by Campbell, have a number of diverse applications. Several cost functions are shown here to yield quasiarithmetic problems with simple redundancy bounds in terms of a generalized entropy. A related property, also shown here, involves the existence of optimal codes: for "well-behaved" cost functions, optimal codes always exist for (possibly infinite-alphabet) sources having finite generalized entropy. An algorithm is introduced for finding binary codes optimal for convex cost functions. This algorithm, which can be extended to other minimization utilities, runs in quadratic time and linear space. This reduces the computational complexity of a problem involving minimum delay in a queue, allows combinations of previously considered problems to be optimized, and greatly expands the set of problems solvable in quadratic time and linear space.

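A minimal worked example of Campbell's generalized mean for the exponential cost $\rho(x) = 2^{tx}$, with assumed probabilities and codeword lengths; as $t \to 0$ the penalty recovers the ordinary mean length, and as $t$ grows it penalizes long codewords ever more heavily.

```python
import math

# Assumed illustrative prefix code: probabilities and codeword lengths.
p = [0.5, 0.25, 0.125, 0.125]
l = [1, 2, 3, 3]               # Kraft sum = 1, so a valid prefix code exists

def campbell_cost(p, l, t):
    """Generalized mean rho^{-1}(sum_i p_i rho(l_i)) for rho(x) = 2^(t*x)."""
    return math.log2(sum(pi * 2 ** (t * li) for pi, li in zip(p, l))) / t

print("mean length:", sum(pi * li for pi, li in zip(p, l)))
for t in (0.001, 1.0, 5.0):
    print(f"t = {t}: cost = {campbell_cost(p, l, t):.4f}")
# As t -> 0 the cost approaches the mean length (1.75);
# as t grows it approaches max(l_i) = 3.
```
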
  • On Divergences and Informations in Statistics and Information Theory

    Publication Year: 2006, Page(s): 4394 - 4412
    Cited by:  Papers (36)

    The paper deals with the f-divergences of Csiszár, generalizing the discrimination information of Kullback, the total variation distance, the Hellinger divergence, and the Pearson divergence. All basic properties of f-divergences, including relations to the decision errors, are proved in a new manner, replacing the classical Jensen inequality by a new generalized Taylor expansion of convex functions. Some new properties are proved too, e.g., relations to statistical sufficiency and deficiency. The generalized Taylor expansion also shows very easily that all f-divergences are average statistical informations (differences between prior and posterior Bayes errors), mutually differing only in the weights imposed on various prior distributions. The statistical information introduced by De Groot and the classical information of Shannon are shown to be extremal cases corresponding to $\alpha=0$ and $\alpha=1$ in the class of so-called Arimoto $\alpha$-informations, introduced in this paper for $0<\alpha<1$ by means of the Arimoto $\alpha$-entropies. Some new examples of f-divergences are introduced as well, namely, the Shannon divergences and the Arimoto $\alpha$-divergences, leading for $\alpha\uparrow 1$ to the Shannon divergences. Square roots of all these divergences are shown to be metrics satisfying the triangle inequality. The last section introduces statistical tests and estimators based on the minimal f-divergence with the empirical distribution achieved in the families of hypothetic distributions. For the Kullback divergence, this leads to the classical likelihood ratio test and estimator.

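A minimal sketch showing how the four divergences named above arise from the single formula $D_f(P\|Q) = \sum_x q(x)\, f(p(x)/q(x))$; the two discrete distributions are assumed for illustration.

```python
import numpy as np

p = np.array([0.5, 0.3, 0.2])   # assumed distributions
q = np.array([0.4, 0.4, 0.2])

def f_divergence(p, q, f):
    """D_f(P||Q) = sum_x q(x) * f(p(x)/q(x)), for convex f with f(1) = 0."""
    return np.sum(q * f(p / q))

divergences = {
    "Kullback (KL)":     lambda t: t * np.log(t),
    "total variation":   lambda t: 0.5 * np.abs(t - 1),
    "squared Hellinger": lambda t: (np.sqrt(t) - 1) ** 2,
    "Pearson chi^2":     lambda t: (t - 1) ** 2,
}
for name, f in divergences.items():
    print(f"{name}: {f_divergence(p, q, f):.4f}")
```
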
  • A Random Linear Network Coding Approach to Multicast

    Publication Year: 2006, Page(s): 4413 - 4430
    Cited by:  Papers (774)  |  Patents (12)

    We present a distributed random linear network coding approach for transmission and compression of information in general multisource multicast networks. Network nodes independently and randomly select linear mappings from inputs onto output links over some field. We show that this achieves capacity with probability exponentially approaching 1 with the code length. We also demonstrate that random linear coding performs compression when necessary in a network, generalizing error exponents for linear Slepian-Wolf coding in a natural way. Benefits of this approach are decentralized operation and robustness to network changes or link failures. We show that this approach can take advantage of redundant network capacity for improved success probability and robustness. We illustrate some potential advantages of random linear network coding over routing in two examples of practical scenarios: distributed network operation and networks with dynamically varying connections. Our derivation of these results also yields a new bound on required field size for centralized network coding on general multicast networks.

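A minimal sketch of the core random-coding event: a receiver can decode iff the random coefficient matrix it observes is invertible over the field. The sketch below estimates this probability over GF(2), the smallest (worst-case) field; field and dimensions are assumptions for illustration.

```python
import numpy as np

def gf2_rank(M):
    """Rank of a 0/1 matrix over GF(2) via Gaussian elimination."""
    M = M.copy() % 2
    rank, rows, cols = 0, *M.shape
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        for r in range(rows):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]
        rank += 1
    return rank

rng = np.random.default_rng(1)
k, trials = 4, 10_000   # k source packets (assumed)
# Receiver collects k random GF(2)-combinations of the k source packets;
# decoding succeeds iff the k x k coefficient matrix has full rank.
ok = sum(gf2_rank(rng.integers(0, 2, (k, k))) == k for _ in range(trials))
print("empirical success prob:", ok / trials)
print("exact:", np.prod([1 - 2.0 ** (-i) for i in range(1, k + 1)]))
```

Over larger fields the success probability approaches 1, which is the reason the random-coding guarantee improves with field size.
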
  • Unifying Views of Tail-Biting Trellis Constructions for Linear Block Codes

    Publication Year: 2006, Page(s): 4431 - 4443
    Cited by:  Papers (10)

    In this paper, we present new ways of describing and constructing linear tail-biting trellises for block codes. We extend the well-known Bahl-Cocke-Jelinek-Raviv (BCJR) construction for conventional trellises to tail-biting trellises. The BCJR-like labeling scheme yields a simple specification for the tail-biting trellis for the dual code, with the dual trellis having the same state-complexity profile as that of the primal code. Finally, we show that the algebraic specification of Forney for state spaces of conventional trellises has a natural extension to tail-biting trellises.

  • Determination of the Local Weight Distribution of Binary Linear Block Codes

    Publication Year: 2006, Page(s): 4444 - 4454
    Cited by:  Papers (1)

    Some methods to determine the local weight distribution of binary linear codes are presented. Two approaches are studied: a computational approach and a theoretical approach. For the computational approach, an algorithm for computing the local weight distribution of codes using the automorphism group of the codes is devised. In this algorithm, a code is considered as the set of cosets of a subcode, and the set of cosets is partitioned into equivalence classes. Thus, only the weight distributions of zero neighbors for each representative coset of the equivalence classes are computed. For the theoretical approach, relations between the local weight distribution of a code, its extended code, and its even weight subcode are studied. As a result, the local weight distributions of some of the extended primitive Bose-Chaudhuri-Hocquenghem (BCH) codes, Reed-Muller codes, primitive BCH codes, punctured Reed-Muller codes, and even weight subcodes of primitive BCH codes and punctured Reed-Muller codes are determined.

  • Weights Modulo a Prime Power in Divisible Codes and a Related Bound

    Publication Year: 2006, Page(s): 4455 - 4463
    Cited by:  Papers (21)

    In this paper, we generalize the theorem given by R. M. Wilson about weights modulo $p^t$ in linear codes to a divisible code version. Using a similar idea, we give an upper bound for the dimension of a divisible code by some divisibility property of its weight enumerator modulo $p^e$. We also prove that this bound implies Ward's bound for divisible codes. Moreover, we see that in some cases, our bound gives better results than Ward's bound.

  • Time-Varying Maximum Transition Run Constraints

    Publication Year: 2006, Page(s): 4464 - 4480
    Cited by:  Papers (4)

    Maximum transition run (MTR) constrained systems are used to improve detection performance in storage channels. Recently, there has been a growing interest in time-varying MTR (TMTR) systems, after such codes were observed to eliminate certain error events and thus provide high coding gain for $\mathrm{E}^{n}\mathrm{PR4}$ channels for $n=2,3$. In this work, TMTR constraints parameterized by a vector, whose coordinates specify periodically the maximum runlengths of 1's ending at the positions, are investigated. A canonical way to classify such constraints and simplify their minimal graph presentations is introduced. It is shown that there is a particularly simple presentation for a special class of TMTR constraints, and explicit descriptions of their characteristic equations are derived. New upper bounds on the capacity of TMTR constraints are established, and an explicit linear ordering by capacity of all tight TMTR constraints up to period 4 is given. For MTR constrained systems with unconstrained positions, it is shown that the set of sequences restricted to the constrained positions yields a natural TMTR constraint. Using TMTR constraints, a new upper bound on the tradeoff function for MTR systems that relates the density of unconstrained positions to the maximum code rates is determined.

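A minimal sketch of how the capacity of an MTR-type constraint is computed: build the graph whose states track the trailing run of 1's and take $\log_2$ of the spectral radius of its adjacency matrix. Plain MTR(2) (at most two consecutive 1's) is used as an assumed example; a TMTR constraint only changes the graph.

```python
import numpy as np

# States track the current run of trailing 1's: 0, 1, or 2.
# From state s: emit 0 -> state 0; emit 1 -> state s+1 (only if s < 2).
A = np.array([[1, 1, 0],
              [1, 0, 1],
              [1, 0, 0]])

spectral_radius = max(abs(np.linalg.eigvals(A)))
print("capacity of MTR(2):", np.log2(spectral_radius))  # ~0.8791 bits/symbol
```
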
  • Low-Density Parity-Check Lattices: Construction and Decoding Analysis

    Publication Year: 2006, Page(s): 4481 - 4495
    Cited by:  Papers (19)

    Low-density parity-check (LDPC) codes can have impressive performance under iterative decoding algorithms. In this paper, we introduce a method to construct high-coding-gain lattices with low decoding complexity based on LDPC codes. To construct such lattices, we apply Construction D', due to Bos, Conway, and Sloane, to a set of parity checks defining a family of nested LDPC codes. For the decoding algorithm, we generalize the application of the max-sum algorithm to the Tanner graph of lattices. Bounds on the decoding complexity are derived, and our analysis shows that using LDPC codes results in low decoding complexity for the proposed lattices. The progressive edge growth (PEG) algorithm is then extended to construct a class of nested regular LDPC codes, which are in turn used to generate low-density parity-check lattices. Using this approach, a class of two-level lattices is constructed. The performance of this class improves as the dimension increases and is within 3 dB of the Shannon limit for error probabilities of about $10^{-6}$, while the decoding complexity remains quite manageable even for dimensions of a few thousand.

  • A New Upper Bound on the Block Error Probability After Decoding Over the Erasure Channel

    Publication Year: 2006, Page(s): 4496 - 4503
    Cited by:  Papers (6)

    Motivated by cryptographic applications, we derive a new upper bound on the block error probability after decoding over the erasure channel. The bound works for all linear codes and is given in terms of the generalized Hamming weights. It turns out to be quite useful for Reed-Muller codes, for which all the generalized Hamming weights are known, whereas the full weight distribution is only partially known. For these codes, the error probability is related to the cryptographic notion of algebraic immunity. We use our bound to show that the algebraic immunity of a random balanced $m$-variable Boolean function is of order $(m/2)(1-o(1))$ with probability tending to 1 as $m$ goes to infinity.

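A minimal brute-force check of the quantity being bounded: over the erasure channel, ML decoding of a linear code is ambiguous exactly when the erased coordinates cover the support of a nonzero codeword. For an assumed small code (the [7,4] Hamming code) the block error probability can be computed exactly.

```python
import itertools
import numpy as np

G = np.array([[1,0,0,0,1,1,0],   # [7,4] Hamming generator (assumed example)
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
n, k = 7, 4
codewords = [(np.array(m) @ G) % 2 for m in itertools.product([0, 1], repeat=k)]

def ambiguous(erased):
    """Decoding fails iff some nonzero codeword is zero off the erased set."""
    return any(c.any() and not c[~erased].any() for c in codewords)

eps = 0.1   # assumed erasure probability
p_err = 0.0
for pattern in itertools.product([False, True], repeat=n):
    erased = np.array(pattern)
    e = erased.sum()
    if ambiguous(erased):
        p_err += eps ** e * (1 - eps) ** (n - e)
print("exact block error probability:", p_err)
```
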
  • Robust Measurement-Based Admission Control Using Markov's Theory of Canonical Distributions

    Publication Year: 2006, Page(s): 4504 - 4518
    Cited by:  Papers (2)

    This paper presents models, algorithms, and analysis for measurement-based admission control in network applications in which there is high uncertainty concerning source statistics. In the process it extends and unifies several recent approaches to admission control. A new class of algorithms is introduced based on results concerning Markov's canonical distributions. In addition, a new model is developed for the evolution of the number of flows in the admission control system. Performance evaluation is done through both analysis and simulation. Results show that the proposed algorithms minimize buffer-overflow probability among the class of all moment-consistent algorithms.

  • A Necessary and Sufficient Condition for the Construction of 2-to-1 Optical FIFO Multiplexers by a Single Crossbar Switch and Fiber Delay Lines

    Publication Year: 2006, Page(s): 4519 - 4531
    Cited by:  Papers (28)

    In this paper, we prove a necessary and sufficient condition for the construction of 2-to-1 optical buffered first-in-first-out (FIFO) multiplexers by a single crossbar switch and fiber delay lines. We consider a feedback system consisting of an $(M+2)\times(M+2)$ crossbar switch and $M$ fiber delay lines with delays $d_1, d_2, \ldots, d_M$. These $M$ fiber delay lines are connected from $M$ outputs of the crossbar switch back to $M$ inputs of the switch, leaving two inputs (respectively, two outputs) of the switch for the two inputs (respectively, two outputs) of the 2-to-1 multiplexer. The main contribution of this paper is the formal proof that $d_1=1$ and $d_i \le d_{i+1} \le 2d_i$, $i=1,2,\ldots,M-1$, is a necessary and sufficient condition on the delays $d_1, d_2, \ldots, d_M$ for such a feedback system to be operated as a 2-to-1 FIFO multiplexer with buffer $\sum_{i=1}^{M} d_i$ under a simple packet routing policy. Specifically, the routing of a packet is according to a specific decomposition of the packet delay, called the C-transform in this paper. Our result shows that under such a feedback architecture a 2-to-1 FIFO multiplexer can be constructed with $M=O(\log B)$, where $B$ is the buffer size. Therefore, our construction improves on a more complicated construction recently proposed by Sarwate and Anantharam that requires $M=O(\sqrt{B})$ under the same feedback architecture (we note that their design is more general and works for priority queues).

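A minimal sketch checking the delay condition and the resulting buffer size: with the fastest-growing admissible choice $d_i = 2^{i-1}$ (an assumed example), the buffer $B = \sum_i d_i$ grows exponentially in $M$, so $M = O(\log B)$ delay lines suffice.

```python
def valid_delays(d):
    """Check d_1 = 1 and d_i <= d_{i+1} <= 2*d_i for all i."""
    return d[0] == 1 and all(a <= b <= 2 * a for a, b in zip(d, d[1:]))

# Doubling delays: the extreme case the condition allows.
M = 10
d = [2 ** i for i in range(M)]        # 1, 2, 4, ..., 512
assert valid_delays(d)
B = sum(d)                            # buffer size 2^M - 1
print(f"M = {M} delay lines give buffer B = {B} (log2 B ~ {B.bit_length()})")

# A choice violating the condition (jump bigger than doubling):
assert not valid_delays([1, 2, 5])
```
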
  • Robust Competitive Estimation With Signal and Noise Covariance Uncertainties

    Publication Year: 2006, Page(s): 4532 - 4547
    Cited by:  Papers (8)

    Robust estimation of a random vector in a linear model in the presence of model uncertainties has been studied in several recent works. While previous methods considered the case in which the uncertainty is in the signal covariance, and possibly the model matrix, but the noise covariance is assumed to be completely specified, here we extend the results to the case where the noise statistics may also be subject to uncertainty. We propose several different approaches to robust estimation, which differ in their assumptions on the given statistics. In the first method, we assume that the model matrix and both the signal and the noise covariance matrices are uncertain, and develop a minimax mean-squared error (MSE) estimator that minimizes the worst case MSE in the region of uncertainty. The second strategy assumes that the model matrix is given and tries to uniformly approach the performance of the linear minimum MSE estimator that knows the signal and noise covariances by minimizing a worst case regret measure. The regret is defined as the difference or ratio between the MSE attainable using a linear estimator, ignorant of the signal and noise covariances, and the minimum MSE possible when the statistics are known. As we show, earlier solutions follow directly from our more general results. However, the approach taken here in developing the robust estimators is considerably simpler than previous methods.

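A minimal numerical sketch of the baseline against which the regret above is measured: the linear MMSE estimator for $y = Ax + w$ with known covariances, and the MSE penalty incurred when the noise covariance used in the design is wrong. All matrix values are assumed toys.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(3, 3))           # assumed model matrix
Cx = np.eye(3)                        # signal covariance (assumed known)
Cw_true = 0.5 * np.eye(3)             # true noise covariance
Cw_wrong = 2.0 * np.eye(3)            # mismatched design covariance

def lmmse(Cw):
    """Linear MMSE filter W = Cx A^T (A Cx A^T + Cw)^{-1}."""
    return Cx @ A.T @ np.linalg.inv(A @ Cx @ A.T + Cw)

def mse(W):
    """E||x - W y||^2 evaluated under the *true* statistics."""
    Cy = A @ Cx @ A.T + Cw_true
    return np.trace(Cx - 2 * W @ A @ Cx + W @ Cy @ W.T)

print("MSE, matched design:   ", mse(lmmse(Cw_true)))
print("MSE, mismatched design:", mse(lmmse(Cw_wrong)))
```
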
  • Performance of Reduced-Rank Equalization

    Publication Year: 2006, Page(s): 4548 - 4562
    Cited by:  Papers (2)

    We evaluate the performance of reduced-rank equalizers for both single-input single-output (SISO) and multiple-input multiple-output (MIMO) frequency-selective channels. Each equalizer filter is constrained to lie in a Krylov subspace and can be implemented as a reduced-rank multistage Wiener filter (MSWF). Both reduced-rank linear and decision-feedback equalizers (DFEs) are considered. Our results are asymptotic as the filter length goes to infinity. For SISO channels, the output mean-squared error (MSE) is expressed in terms of the moments of the channel spectrum. For MIMO channels, both successive and parallel interference cancellation are considered. The asymptotic performance in that case requires the computation of moments that depend on shifted versions of the channel impulse response for different users; these, too, are expressed in terms of the MIMO channel frequency response. Numerical results are presented, which show that near full-rank performance can be achieved with relatively low-rank equalizers.

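A minimal sketch of the Krylov-subspace idea: constrain the Wiener filter to $\mathrm{span}\{p, Rp, R^2p, \ldots\}$ and solve the projected normal equations. The covariance $R$ and cross-correlation $p$ below are assumed toy statistics rather than an actual channel's.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 64
B = rng.normal(size=(n, n))
R = B @ B.T / n + np.eye(n)    # observation covariance (assumed)
p = rng.normal(size=n)         # cross-correlation with desired symbol (assumed)

def krylov_wiener(R, p, rank):
    """Rank-constrained Wiener filter restricted to K_rank(R, p)."""
    K = np.column_stack([np.linalg.matrix_power(R, i) @ p for i in range(rank)])
    Q, _ = np.linalg.qr(K)                      # orthonormal basis of the subspace
    a = np.linalg.solve(Q.T @ R @ Q, Q.T @ p)   # projected normal equations
    return Q @ a

w_full = np.linalg.solve(R, p)                  # full-rank Wiener solution
for r in (1, 2, 4, 8):
    w = krylov_wiener(R, p, r)
    # Excess MSE over the full-rank filter (the sigma_d^2 term cancels).
    print(f"rank {r}: excess MSE = {(w - w_full) @ R @ (w - w_full):.4e}")
```
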
  • Wideband Extended Range-Doppler Imaging and Waveform Design in the Presence of Clutter and Noise

    Publication Year: 2006, Page(s): 4563 - 4580
    Cited by:  Papers (5)

    This paper presents a group-theoretic approach to wideband extended range-Doppler target imaging and the design of clutter-rejecting waveforms. An exact imaging method based on the inverse Fourier transform of the affine group is presented. A Wiener filter is designed in the affine group Fourier transform domain to minimize wideband clutter range-Doppler reflectivity. The Wiener filter is then used to form an operator to precondition transmitted waveforms to reject clutter. Alternatively, the imaging and clutter rejection methods are equivalently re-expressed to perform clutter suppression upon reception. These methods are coupled with noise suppression upon reception. Numerical simulations are performed to demonstrate the performance of the proposed approach. Our study shows that the framework introduced in this paper can address the joint design of receive and transmit processing, the design of clutter-rejecting waveforms, the suppression of noise, and the reduction of computational complexity in receive processing.

  • Maiorana–McFarland Class: Degree Optimization and Algebraic Properties

    Publication Year: 2006, Page(s): 4581 - 4594
    Cited by:  Papers (2)

    In this paper, we consider a subclass of the Maiorana-McFarland class used in the design of resilient nonlinear Boolean functions. We show that these functions allow a simple modification so that resilient Boolean functions of maximum algebraic degree may be generated, instead of the suboptimal degree in the original class. Preserving the high nonlinearity inherent in the original construction method, together with the degree optimization, gives in many cases functions with cryptographic properties superior to those of all previously known construction methods. This approach is then used to increase the algebraic degree of functions in the extended Maiorana-McFarland (MM) class (nonlinear resilient functions $F: \mathrm{GF}(2)^n \rightarrow \mathrm{GF}(2)^m$ derived from linear codes). We also show that in the Boolean case, the same subclass seems not to have optimized algebraic immunity, hence not providing maximum resistance against algebraic attacks. A theoretical analysis of the algebraic properties of the extended Maiorana-McFarland class indicates that this class of functions should be avoided as a filtering function in nonlinear combining generators.

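A minimal sketch of the basic Maiorana-McFarland construction $f(x,y) = x \cdot \phi(y) \oplus g(y)$ (not the paper's degree-optimized modification), with its nonlinearity read off the Walsh-Hadamard spectrum; $\phi$ and $g$ are assumed toy choices, and with $\phi$ a bijection on $n = 4$ variables the result is bent with nonlinearity 6.

```python
s = t = 2                         # f on n = s + t = 4 variables
phi = {0: 0, 1: 1, 2: 2, 3: 3}    # a bijection GF(2)^t -> GF(2)^s (assumed)
g = {0: 0, 1: 0, 2: 1, 3: 0}      # arbitrary Boolean function of y (assumed)

def dot(a, b):
    """Inner product of bit vectors packed into integers."""
    return bin(a & b).count("1") % 2

# Truth table of f(x, y) = <x, phi(y)> XOR g(y); input index = (y << s) | x.
tt = [dot(x, phi[y]) ^ g[y] for y in range(2 ** t) for x in range(2 ** s)]

n = s + t
walsh = [sum((-1) ** (tt[u] ^ dot(u, w)) for u in range(2 ** n))
         for w in range(2 ** n)]
nonlinearity = 2 ** (n - 1) - max(abs(v) for v in walsh) // 2
print("nonlinearity:", nonlinearity)   # 6 = 2^(n-1) - 2^(n/2 - 1): bent
```
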
  • The Eta Pairing Revisited

    Publication Year: 2006, Page(s): 4595 - 4602
    Cited by:  Papers (58)  |  Patents (1)

    In this paper, we simplify and extend the Eta pairing, originally discovered in the setting of supersingular curves by Barreto et al., to ordinary curves. Furthermore, we show that by swapping the arguments of the Eta pairing, one obtains a very efficient algorithm resulting in a speed-up of a factor of around six over the usual Tate pairing, in the case of curves that have large security parameters, complex multiplication by an order of $\mathbb{Q}(\sqrt{-3})$, and a trace of Frobenius chosen to be suitably small. Other, more minor savings are obtained for more general curves.

  • The Hardness of the Closest Vector Problem With Preprocessing Over the $\ell_\infty$ Norm

    Publication Year: 2006, Page(s): 4603 - 4606
    Cited by:  Papers (2)

    We show that the closest vector problem with preprocessing (CVPP) over the $\ell_\infty$ norm ($\mathrm{CVPP}_\infty$) is NP-hard. The result is obtained by a reduction from the subset sum problem with preprocessing to $\mathrm{CVPP}_\infty$. The reduction also shows the NP-hardness of $\mathrm{CVP}_\infty$, with a proof that is much simpler than all previously known ones. In addition, we give a direct reduction from the exact cover by 3-sets problem to $\mathrm{CVPP}_\infty$.

  • New Design of Low-Correlation Zone Sequence Sets

    Publication Year: 2006, Page(s): 4607 - 4616
    Cited by:  Papers (14)

    In this paper, we present several construction methods for low-correlation zone (LCZ) sequence sets. First, we propose a design scheme for binary LCZ sequence sets with parameters $(2^{n+1}-2, M, L, 2)$. In this scheme, we can freely set the LCZ length $L$, and the resulting LCZ sequence sets have size $M$, which is almost optimal with respect to the Tang, Fan, and Matsufuji bound. Second, given a $q$-ary LCZ sequence set with parameters $(N, M, L, \epsilon)$ and even $q$, we construct another $q$-ary LCZ sequence set with parameters $(2N, 2M, L, 2\epsilon)$ or $(2N, 2M, L-1, 2\epsilon)$. In particular, the new set with parameters $(2N, 2M, L, 2)$ can be optimal in terms of the set size if a $q$-ary optimal LCZ sequence set with parameters $(N, M, L, 1)$ is used.

  • Meaningful Information

    Publication Year: 2006, Page(s): 4617 - 4626
    Cited by:  Papers (10)

    The information in an individual finite object (like a binary string) is commonly measured by its Kolmogorov complexity. One can divide that information into two parts: the information accounting for the useful regularity present in the object and the information accounting for the remaining accidental information. There can be several ways (model classes) in which the regularity is expressed. Kolmogorov proposed the model class of finite sets, generalized later to computable probability mass functions. The resulting theory, known as Algorithmic Statistics, analyzes the algorithmic sufficient statistic when the statistic is restricted to the given model class. However, the most general way to proceed is perhaps to express the useful information as a total recursive function. The resulting measure has been called the "sophistication" of the object. We develop the theory of the total recursive function statistic: its maximum and minimum values, the existence of absolutely nonstochastic objects (objects of maximal sophistication, in which all the information is meaningful and there is no residual randomness), its relation to the more restricted model classes of finite sets and computable probability distributions (in particular with respect to the algorithmic (Kolmogorov) minimal sufficient statistic), its relation to the halting problem, and further algorithmic properties.

  • Zero-Error Source–Channel Coding With Side Information

    Publication Year: 2006, Page(s): 4626 - 4629
    Cited by:  Papers (5)

    This correspondence presents a novel application of the theta function defined by Lovász. The problem of coding for transmission of a source through a channel without error when the receiver has side information about the source is analyzed. Using properties of the Lovász theta function, it is shown that separate source and channel coding is asymptotically suboptimal in general. By contrast, in the case of vanishingly small probability of error, separate source and channel coding is known to be asymptotically optimal. For the zero-error case, it is further shown that the joint coding gain can in fact be unbounded. Since separate coding simplifies code design and use, conditions on sources and channels for the optimality of separate coding are also derived.

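A minimal sketch of the zero-error ingredient behind such results: a one-shot zero-error code is an independent set in the channel's confusability graph, and the Lovász theta function upper-bounds the zero-error capacity. Brute force on the pentagon channel (the classic example, with $\theta(C_5) = \sqrt{5}$), as an assumed illustration:

```python
import itertools

# Confusability graph C5: input i can be confused with i +/- 1 (mod 5).
n = 5
confusable = lambda a, b: (a - b) % n in (1, n - 1)

def max_independent_set(vertices):
    """Largest set of pairwise non-confusable inputs = one-shot zero-error code."""
    for r in range(len(vertices), 0, -1):
        for S in itertools.combinations(vertices, r):
            if all(not confusable(a, b) for a, b in itertools.combinations(S, 2)):
                return S
    return ()

print("one-shot zero-error code:", max_independent_set(range(n)))  # size 2
# Shannon: over two channel uses the pentagon admits 5 codewords, i.e.
# sqrt(5) per use, matching the Lovasz theta upper bound sqrt(5).
```
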
  • Consistency of the Unlimited BIC Context Tree Estimator

    Publication Year: 2006, Page(s): 4630 - 4635
    Cited by:  Papers (10)

    The Bayesian information criterion (BIC) and the Krichevsky-Trofimov (KT) version of the minimum description length (MDL) principle are popular in the study of model selection. For order estimation of Markov chains, both are known to be strongly consistent when there is an upper bound on the order. In the unbounded case, the BIC is also known to be consistent, but the KT estimator is consistent only with a bound of $o(\log n)$ on the order. For context trees, a flexible generalization of Markov models widely used in data processing, the problem is more complicated both in theory and practice, given the substantially higher number of possible candidate models. Imre Csiszár and Zsolt Talata proved the consistency of BIC and KT when the hypothetical tree depths are allowed to grow as $o(\log n)$. This correspondence proves that such a restriction is not necessary for finite context sources: the BIC context tree estimator is strongly consistent even if there is no constraint at all on the size of the chosen tree. Moreover, an algorithm that computes the tree minimizing the BIC criterion among all context trees in linear time is provided.

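A minimal sketch of BIC order selection in the simpler Markov-chain setting referenced above (context trees refine this by letting the memory depend on the context); the data-generating chain is an assumed toy.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(4)
# Generate a binary first-order Markov chain: P(x_t = x_{t-1}) = 0.8.
x = [0]
for _ in range(5000):
    x.append(x[-1] if rng.random() < 0.8 else 1 - x[-1])

def bic(x, k, alphabet=2):
    """Max log-likelihood of an order-k chain minus the BIC penalty."""
    counts = Counter(tuple(x[i:i + k + 1]) for i in range(len(x) - k))
    ctx = Counter(tuple(x[i:i + k]) for i in range(len(x) - k))
    loglik = sum(c * np.log(c / ctx[key[:-1]]) for key, c in counts.items())
    n_params = alphabet ** k * (alphabet - 1)
    return loglik - 0.5 * n_params * np.log(len(x))

scores = {k: bic(x, k) for k in range(4)}
print("BIC scores:", {k: round(v, 1) for k, v in scores.items()})
print("selected order:", max(scores, key=scores.get))   # expect 1
```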

Aims & Scope

IEEE Transactions on Information Theory publishes papers concerned with the transmission, processing, and utilization of information.

Meet Our Editors

Editor-in-Chief
Frank R. Kschischang

Department of Electrical and Computer Engineering