
IEEE Transactions on Information Theory

Issue 6 • June 2002

  • Scalar versus vector quantization: worst case analysis

    Page(s): 1393 - 1409
    PDF (519 KB) | HTML

    We study the potential merits of vector quantization and show that there can be an arbitrary discrepancy between the worst case rates required for scalar and vector quantization. Specifically, we describe a random variable and a distortion measure where quantization of a single instance to within a given distortion requires an arbitrarily large number of bits in the worst case, but quantization of multiple independent instances to within the same distortion requires an arbitrarily small number of bits per instance in the worst case. We relate this discrepancy to expander graphs, representation- and cover-numbers of set systems, and a problem studied by Slepian, Wolf, and Wyner (1973).

  • Opportunistic beamforming using dumb antennas

    Page(s): 1277 - 1294
    PDF (432 KB) | HTML

    Multiuser diversity is a form of diversity inherent in a wireless network, provided by independent time-varying channels across the different users. The diversity benefit is exploited by tracking the channel fluctuations of the users and scheduling transmissions to users when their instantaneous channel quality is near the peak. The diversity gain increases with the dynamic range of the fluctuations and is thus limited in environments with little scattering and/or slow fading. In such environments, we propose the use of multiple transmit antennas to induce large and fast channel fluctuations so that multiuser diversity can still be exploited. The scheme can be interpreted as opportunistic beamforming and we show that true beamforming gains can be achieved when there are sufficient users, even though very limited channel feedback is needed. Furthermore, in a cellular system, the scheme plays an additional role of opportunistic nulling of the interference created on users of adjacent cells. We discuss the design implications of implementing this scheme in a complete wireless system.
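
    The scheduling rule at the heart of this scheme is simple to simulate. Below is a minimal sketch (our illustration, not the authors' implementation): the user channels are slowly fading and essentially static, a random unit-norm beam is drawn each slot to induce SNR fluctuations, and each slot is granted to the user with the best instantaneous SNR. All parameter values are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_users, n_tx, n_slots = 16, 2, 10_000

    # Slow fading: each user's channel is essentially static over the horizon.
    H = (rng.standard_normal((n_users, n_tx)) +
         1j * rng.standard_normal((n_users, n_tx))) / np.sqrt(2)

    scheduled = np.empty(n_slots)
    for t in range(n_slots):
        # "Dumb antennas": a random unit-norm beam induces large, fast fluctuations.
        w = rng.standard_normal(n_tx) + 1j * rng.standard_normal(n_tx)
        w /= np.linalg.norm(w)
        snr = np.abs(H @ w) ** 2      # per-user instantaneous SNR (unit noise power)
        scheduled[t] = snr.max()      # serve whichever user is currently near its peak

    print("mean scheduled SNR   :", scheduled.mean())
    # Compare with the best user's full coherent beamforming gain:
    print("best beamforming gain:", (np.linalg.norm(H, axis=1) ** 2).max())
    ```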

  • Source coding, large deviations, and approximate pattern matching

    Page(s): 1590 - 1615
    PDF (872 KB) | HTML

    We present a development of parts of rate-distortion theory and pattern-matching algorithms for lossy data compression, centered around a lossy version of the asymptotic equipartition property (AEP). This treatment closely parallels the corresponding development in lossless compression, a point of view that was advanced in an important paper of Wyner and Ziv in 1989. In the lossless case, we review how the AEP underlies the analysis of the Lempel-Ziv algorithm by viewing it as a random code and reducing it to the idealized Shannon code. This also provides information about the redundancy of the Lempel-Ziv algorithm and about the asymptotic behavior of several relevant quantities. In the lossy case, we give various versions of the statement of the generalized AEP and we outline the general methodology of its proof via large deviations. Its relationship with the generalized AEP of Barron (1985) and Orey (1985, 1986) is also discussed. The lossy AEP is applied to (i) prove strengthened versions of Shannon's (1948, 1974) direct source-coding theorem and universal coding theorems; (ii) characterize the performance of "mismatched" codebooks in lossy data compression; (iii) analyze the performance of pattern-matching algorithms for lossy compression (including Lempel-Ziv schemes); and (iv) determine the first-order asymptotics of waiting times between stationary processes. A refinement to the lossy AEP is then presented, and it is used to (i) prove second-order (direct and converse) lossy source-coding theorems, including universal coding theorems; (ii) characterize which sources are quantitatively easier to compress; (iii) determine the second-order asymptotics of waiting times between stationary processes; and (iv) determine the precise asymptotic behavior of longest match-lengths between stationary processes. Finally, we discuss extensions of the above framework and results to random fields.
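
    For orientation, the generalized (lossy) AEP at the center of this development is commonly stated along the following lines; this is a schematic rendering of the standard statement, not a verbatim quotation of the paper's theorems. For an i.i.d. source X_1^n drawn from P, a codebook distribution Q, and the distortion ball B(X_1^n, D),

    $$ -\frac{1}{n}\log Q^n\bigl(B(X_1^n, D)\bigr) \longrightarrow R(P, Q, D) \quad \text{a.s.,} $$

    where R(P, Q, D) is the infimum of I(X; Y) + D(P_Y || Q) over joint distributions of (X, Y) with X-marginal P and E d(X, Y) <= D; for Q chosen optimally it reduces to the rate-distortion function R(D).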

  • Quantum rate-distortion theory for memoryless sources

    Page(s): 1580 - 1589
    PDF (372 KB) | HTML

    We formulate quantum rate-distortion theory in the most general setting where classical side information is included in the tradeoff. Using a natural distortion measure based on entanglement fidelity and specializing to the case of an unrestricted classical side channel, we find the exact quantum rate-distortion function for a source of isotropic qubits. An upper bound we believe to be exact is found in the case of biased sources. We establish that in this scenario optimal rate-distortion codes produce no entropy exchange with the environment of any individual qubit.

  • Joint message-passing decoding of LDPC codes and partial-response channels

    Page(s): 1410 - 1422
    PDF (483 KB) | HTML

    Ideas of message passing are applied to the problem of removing the effects of intersymbol interference (ISI) from partial-response channels. Both bit-based and state-based parallel message-passing algorithms are proposed. For a fixed number of iterations less than the block length, the bit-error rate of the state-based algorithm approaches a nonzero constant as the signal-to-noise ratio (SNR) approaches infinity. This limitation can be removed by using a precoder. It is well known that low-density parity-check (LDPC) codes can be decoded using a message-passing algorithm. Here, a single message-passing detector/decoder matched to the combination of a partial-response channel and an LDPC code is investigated.

  • Multiple-antennas and isotropically random unitary inputs: the received signal density in closed form

    Page(s): 1473 - 1484
    PDF (486 KB) | HTML

    An important open problem in multiple-antenna communications theory is to compute the capacity of a wireless link subject to flat Rayleigh block-fading, with no channel-state information (CSI) available either to the transmitter or to the receiver. The isotropically random (i.r.) unitary matrix (having orthonormal columns and a probability density that is invariant to premultiplication by an independent unitary matrix) plays a central role in the calculation of capacity and in some special cases happens to be capacity-achieving. We take an important step toward computing this capacity by obtaining, in closed form, the probability density of the received signal when transmitting i.r. unitary matrices. The technique is based on analytically computing the expectation of an exponential quadratic function of an i.r. unitary matrix and makes use of a Fourier integral representation of the constituent Dirac delta functions in the underlying density. Our formula for the received signal density enables us to evaluate the mutual information for any case of interest, something that could previously only be done for single transmit and receive antennas. Numerical results show that at high signal-to-noise ratio (SNR), the mutual information is maximized for M = min(N, T/2) transmit antennas, where N is the number of receive antennas and T is the length of the coherence interval, whereas at low SNR, the mutual information is maximized by allocating all transmit power to a single antenna.
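
    Sampling an i.r. unitary matrix numerically is straightforward. The recipe below is a standard technique (not taken from the paper): QR-factor a complex Gaussian matrix and correct the phases so that the result is isotropically (Haar) distributed.

    ```python
    import numpy as np

    def ir_unitary(T, M, rng=np.random.default_rng(0)):
        """Sample a T x M isotropically random matrix with orthonormal columns."""
        G = rng.standard_normal((T, M)) + 1j * rng.standard_normal((T, M))
        Q, R = np.linalg.qr(G)
        d = np.diagonal(R)
        return Q * (d / np.abs(d))    # fix the phase ambiguity of the QR factor

    Phi = ir_unitary(T=8, M=4)
    print(np.allclose(Phi.conj().T @ Phi, np.eye(4)))    # True: orthonormal columns
    ```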

  • Spectral efficiency in the wideband regime

    Page(s): 1319 - 1343
    PDF (766 KB) | HTML

    The tradeoff of spectral efficiency (b/s/Hz) versus energy-per-information bit is the key measure of channel capacity in the wideband power-limited regime. This paper finds the fundamental bandwidth-power tradeoff of a general class of channels in the wideband regime characterized by low, but nonzero, spectral efficiency and energy per bit close to the minimum value required for reliable communication. A new criterion for optimality of signaling in the wideband regime is proposed, which, in contrast to the traditional criterion, is meaningful for finite-bandwidth communication.
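
    For reference, two numbers anchor this regime in the canonical AWGN case (well-known facts from this line of work, stated here for context): the minimum energy per bit for reliable communication is

    $$ \left(\frac{E_b}{N_0}\right)_{\min} = \ln 2 \approx -1.59\ \mathrm{dB}, $$

    and the "wideband slope," the growth of spectral efficiency per 3 dB of E_b/N_0 near that point, equals 2 b/s/Hz for the AWGN channel. The optimality criterion proposed in the paper is built around this first-order behavior rather than around the minimum energy per bit alone.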

  • The pros and cons of democracy

    Page(s): 1721 - 1725
    PDF (332 KB) | HTML

    We introduce the concept of "democracy," in which the individual bits in a coarsely quantized representation of a signal are all given "equal weight" in the approximation to the original signal. We prove that such democratic representations cannot achieve the same accuracy as optimal nondemocratic schemes.

  • Writing sequences on the plane

    Page(s): 1344 - 1354
    PDF (344 KB)

    The problem of arranging two-dimensional arrays of data into one-dimensional sequences comes up in image processing, color quantization, and optical and magnetic data recording. A good arrangement should enable the one-dimensional sequences to be modeled as Markov chains or shifts of finite type. Since this is not possible in general, two-dimensional data is most commonly scanned by rows, columns, or diagonals. We look into three unusual ways to write a sequence in the plane: by Penrose tilings, by space-filling curves, and by cylindrical and spiral lattices. We show how Penrose tilings can be used to record information and how some spiral lattices can be used for quantization of color spaces.

  • Universal composite hypothesis testing: a competitive minimax approach

    Page(s): 1504 - 1517
    PDF (498 KB) | HTML

    A novel approach is presented for the long-standing problem of composite hypothesis testing. In composite hypothesis testing, unlike in simple hypothesis testing, the probability function of the observed data, given the hypothesis, is uncertain as it depends on the unknown value of some parameter. The proposed approach is to minimize the worst case ratio between the probability of error of a decision rule that is independent of the unknown parameters and the minimum probability of error attainable given the parameters. The principal solution to this minimax problem is presented and the resulting decision rule is discussed. Since the exact solution is, in general, hard to find, and a fortiori hard to implement, an approximation method that yields an asymptotically minimax decision rule is proposed. Finally, a variety of potential application areas are provided in signal processing and communications, with special emphasis on universal decoding.

  • A model for stock price fluctuations based on information

    Page(s): 1372 - 1378
    PDF (271 KB) | HTML

    The author presents a new model for stock price fluctuations based on a concept of "information." In contrast, the usual Black-Scholes-Merton-Samuelson (1965, 1973) model is based on the explicit assumption that information is uniformly held by everyone and plays no role in stock prices. The new model is based on the evident nonuniformity of information in the market and the evident time delay until new information becomes generally known. A second contribution of the paper is to present some problems with explicit solutions which are of value in obtaining insights. Several problems of mathematical interest are compared in order to better understand which optimal stopping problems have explicit solutions.

  • Gambling for the mnemonically impaired

    Page(s): 1379 - 1392
    PDF (548 KB) | HTML

    We obtain asymptotically tight bounds on the maximum amount of information that a single bit of memory can retain about the entire past. At each of n successive epochs, a single fair bit is generated and a one-bit memory is updated according to a family of memory update rules (possibly probabilistic and time-dependent) depending only on the value of the new input bit and on the current state of the memory. The problem is to estimate the supremum over all possible update rules of the minimum mutual information between the state of the memory at time (n + 1) and each of the previous n input bits. We show that this supremum is asymptotically equal to 1/(2n² ln 2) bit, as conjectured by Venkatesh and Franklin (1991). We use this result to derive asymptotically sharp estimates of related maximin correlations between the memory and the input bits, thus resolving two more questions left open by Venkatesh and Franklin and by Komlos et al. (1993). Finally, we generalize the results to the case of an m-bit memory, again obtaining asymptotically tight bounds in many cases.
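
    The Θ(1/n²) scaling is easy to exhibit with one simple (suboptimal) update rule; the sketch below is our illustration, not the paper's construction. Rule: overwrite the memory with the incoming bit with probability p, otherwise keep it. The memory then equals the input bit of the last overwrite, the hardest bit to remember is the first one, and the induced binary symmetric channel gives the mutual information in closed form.

    ```python
    import numpy as np

    def mi_first_bit(n, p):
        """I(memory; X_1) in bits after n epochs, overwrite-with-probability-p rule.

        With the memory initialized to an independent fair bit, the channel
        X_1 -> memory is a BSC with crossover (1 - q) / 2, where
        q = P(the last overwrite happened at epoch 1) = p * (1 - p)**(n - 1).
        """
        q = p * (1 - p) ** (n - 1)
        eps = (1 - q) / 2
        return 1 + eps * np.log2(eps) + (1 - eps) * np.log2(1 - eps)

    n = 1000
    best = max(mi_first_bit(n, p) for p in np.linspace(1e-4, 0.05, 2000))
    print("simple rule      :", best)   # attains the 1/n^2 order, weaker constant
    print("optimal asymptote:", 1 / (2 * n**2 * np.log(2)))
    ```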

  • Duality between channel capacity and rate distortion with two-sided state information

    Page(s): 1629 - 1638
    PDF (452 KB) | HTML

    We show that the duality between channel capacity and data compression is retained when state information is available to the sender, to the receiver, to both, or to neither. We present a unified theory for eight special cases of channel capacity and rate distortion with state information, which also extends existing results to arbitrary pairs of independent and identically distributed (i.i.d.) correlated state information (S1, S2) available at the sender and at the receiver, respectively. In particular, the resulting general formula for channel capacity C = max_{p(u,x|s1)} [I(U; S2, Y) − I(U; S1)] assumes the same form as the generalized Wyner-Ziv (1976) rate distortion function R(D) = min_{p(u|x,s1) p(x̂|u,s2)} [I(U; S1, X) − I(U; S2)].

  • Optimal sequences for CDMA under colored noise: a Schur-saddle function property

    Page(s): 1295 - 1318
    PDF (770 KB) | HTML

    We consider direct sequence code division multiple access (DS-CDMA), modeling interference from users communicating with neighboring base stations by additive colored noise. We consider two types of receiver structures: first, we consider the information-theoretically optimal receiver and use the sum capacity of the channel as our performance measure. Second, we consider the linear minimum mean square error (LMMSE) receiver and use the signal-to-interference ratio (SIR) of the estimate of the symbol transmitted as our performance measure. Our main result is a constructive characterization of the possible performance in both these scenarios. A central contribution of this characterization is the derivation of a qualitative feature of the optimal performance measure in both the scenarios studied. We show that the sum capacity is a saddle function: it is convex in the additive noise covariances and concave in the user received powers. In the linear receiver case, we show that the minimum average power required to meet a set of target performance requirements of the users is a saddle function: it is convex in the additive noise covariances and concave in the set of performance requirements.
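
    For the optimal receiver, the performance measure in question is the familiar log-det sum capacity. In a real-valued symbol-synchronous model with signature matrix S, received powers P = diag(p_1, ..., p_K), and noise covariance Σ it reads (a standard formula, consistent with but not quoted from the paper)

    $$ C_{\mathrm{sum}} = \frac{1}{2}\log\frac{\det\left(\Sigma + S P S^{\mathsf{T}}\right)}{\det\Sigma}, $$

    and the saddle property described above is precisely convexity of this expression in Σ together with concavity in the received powers.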

  • Nested linear/lattice codes for structured multiterminal binning

    Page(s): 1250 - 1276
    PDF (712 KB) | HTML

    Network information theory promises high gains over simple point-to-point communication techniques, at the cost of higher complexity. However, the lack of structured coding schemes has so far limited the practical application of these concepts. One of the basic elements of a network code is the binning scheme. Wyner (1974, 1978) and other researchers proposed various forms of coset codes for efficient binning, yet these schemes were applicable only to lossless source (or noiseless channel) network coding. To extend the algebraic binning approach to lossy source (or noisy channel) network coding, previous work proposed the idea of nested codes, or more specifically, nested parity-check codes for the binary case and nested lattices for the continuous case. These ideas connect network information theory with the rich areas of linear codes and lattice codes, and have strong potential for practical applications. We review these developments and explore their tight relation to concepts such as combined shaping and precoding, coding for memories with defects, and digital watermarking. We also propose a few novel applications adhering to a unified approach.

  • Large-scale typicality of Markov sample paths and consistency of MDL order estimators

    Page(s): 1616 - 1628
    PDF (549 KB) | HTML

    For Markov chains of arbitrary order, with finite alphabet A, limit theorems in the almost-sure sense are proved on relative frequencies of k-blocks, and of symbols preceded by a given k-block, when k is permitted to grow as the sample size n grows. As an application, the consistency of two kinds of minimum description length (MDL) Markov order estimators is proved, with upper bounds o(log n) and α log n with α < 1/log |A|, respectively, on the permissible value of the estimated order. It was shown by Csiszar and Shields (see Ann. Statist., vol. 28, p. 1601-1619, 2000) that in the absence of any bound, or with bound α log n for large α, consistency fails.
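
    A penalized-likelihood MDL order estimator of the kind analyzed here can be sketched in a few lines. The sketch below uses a BIC-style penalty and toy data, both our assumptions; the paper's code lengths and permissible-order bounds differ in the details.

    ```python
    import math, random
    from collections import Counter

    def mdl_order(x, alphabet_size, k_max):
        """Pick the Markov order minimizing ML code length plus a BIC-style penalty."""
        n, A = len(x), alphabet_size
        best_k, best_cost = 0, float("inf")
        for k in range(k_max + 1):
            ctx, trans = Counter(), Counter()
            for i in range(k, n):
                c = tuple(x[i - k:i])
                ctx[c] += 1
                trans[c, x[i]] += 1
            # Negative maximum log-likelihood of the transitions, in nats
            neg_loglik = -sum(m * math.log(m / ctx[c]) for (c, _), m in trans.items())
            penalty = 0.5 * (A ** k) * (A - 1) * math.log(n)
            if neg_loglik + penalty < best_cost:
                best_k, best_cost = k, neg_loglik + penalty
        return best_k

    random.seed(1)
    x = [0]
    for _ in range(5000):                  # a sticky first-order binary chain
        x.append(x[-1] if random.random() < 0.9 else 1 - x[-1])
    print(mdl_order(x, alphabet_size=2, k_max=5))   # expected output: 1
    ```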

  • Finite-length analysis of low-density parity-check codes on the binary erasure channel

    Page(s): 1570 - 1579
    PDF (463 KB) | HTML

    In this paper, we are concerned with the finite-length analysis of low-density parity-check (LDPC) codes when used over the binary erasure channel (BEC). The main result is an expression for the exact average bit and block erasure probability for a given regular ensemble of LDPC codes when decoded iteratively. We also give expressions for upper bounds on the average bit and block erasure probability for regular LDPC ensembles and the standard random ensemble under maximum-likelihood (ML) decoding. Finally, we present what we consider to be the most important open problems in this area.
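
    Over the BEC, the iterative message-passing decoder is equivalent to the simple "peeling" procedure: repeatedly find a parity check with exactly one erased neighbor and use the parity to fill it in. Below is a minimal sketch on a toy parity-check matrix (illustrative only; not one of the paper's ensembles).

    ```python
    import numpy as np

    def peel(H, y):
        """Iterative BEC decoding; y holds 0/1 values and None for erasures."""
        y = list(y)
        progress = True
        while progress:
            progress = False
            for row in H:
                erased = [j for j in np.flatnonzero(row) if y[j] is None]
                if len(erased) == 1:          # a check with a single erasure
                    j = erased[0]
                    y[j] = sum(y[k] for k in np.flatnonzero(row) if k != j) % 2
                    progress = True           # parity pins down the missing bit
        return y

    H = np.array([[1, 1, 0, 1, 1, 0, 0],      # toy (7,4) parity-check matrix
                  [1, 0, 1, 1, 0, 1, 0],
                  [0, 1, 1, 1, 0, 0, 1]])
    received = [0, None, 0, None, 0, 0, 0]    # all-zero codeword, two erasures
    print(peel(H, received))                  # both erasures recovered: all zeros
    ```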

  • Randomness, arrays, differences and duality

    Page(s): 1698 - 1703
    PDF (334 KB) | HTML

    Random variables that take on values in the finite field of q elements are considered. It is shown that joint distributions of such random variables are equivalently described by the individual distributions of their linear combinations. Random vectors X that are equally likely to take on any row of an arbitrary q-ary rectangular array as their value are treated extensively, together with the random vector ΔX defined as the difference between two independent versions of such a random vector. It is shown that linear combinations of exactly τ of the components of X are always biased toward 0. A quantitative measure βτ of this bias is introduced and shown to be given by a sum of Krawtchouk polynomials. The vanishing of βτ is shown to be equivalent to the maximal randomness of linear combinations of exactly τ of the components of X as well as of ΔX. When the rows of the original array are the codewords of a q-ary linear code, the bias βτ coincides with the number of codewords of Hamming weight τ in the dual code. The results of this article generalize certain well-known results such as the MacWilliams (1977) identities and Delsarte's (1973) theorem on the significance of the "dual distance" of nonlinear codes.
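
    The MacWilliams identities that this work generalizes are easy to verify numerically. The sketch below (standard material, not the paper's βτ computation) enumerates the [7,4] Hamming code and its dual and checks that the dual weight distribution is the Krawtchouk transform of the primal one.

    ```python
    import itertools, math
    from collections import Counter

    n = 7
    G = [(1, 0, 0, 0, 1, 1, 0),     # generator matrix of the [7,4] Hamming code
         (0, 1, 0, 0, 1, 0, 1),
         (0, 0, 1, 0, 0, 1, 1),
         (0, 0, 0, 1, 1, 1, 1)]

    code = set()
    for coeffs in itertools.product((0, 1), repeat=len(G)):
        w = [0] * n
        for c, row in zip(coeffs, G):
            if c:
                w = [(a + b) % 2 for a, b in zip(w, row)]
        code.add(tuple(w))

    dual = {v for v in itertools.product((0, 1), repeat=n)
            if all(sum(a * b for a, b in zip(v, g)) % 2 == 0 for g in G)}

    A = Counter(sum(w) for w in code)   # weight distribution of the code
    B = Counter(sum(w) for w in dual)   # weight distribution of its dual

    def krawtchouk(j, i):
        """Binary Krawtchouk polynomial K_j(i) on {0, ..., n}."""
        return sum((-1) ** l * math.comb(i, l) * math.comb(n - i, j - l)
                   for l in range(j + 1))

    # MacWilliams: |C| * B_j = sum_i A_i * K_j(i) for every j
    for j in range(n + 1):
        assert len(code) * B[j] == sum(A[i] * krawtchouk(j, i) for i in range(n + 1))
    print("MacWilliams identities verified for the [7,4] Hamming code")
    ```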

  • Universal codes for finite sequences of integers drawn from a monotone distribution

    Page(s): 1713 - 1720
    PDF (407 KB)

    We offer two noiseless codes for blocks of integers Xn = (X1, ..., Xn). We provide explicit bounds on the relative redundancy that are valid for any distribution F in the class of memoryless sources with a possibly infinite alphabet whose marginal distribution is monotone. Specifically, we show that the expected code length L(Xn) of our first universal code is dominated by a linear function of the entropy of Xn. Further, we present a second universal code that is efficient in that its length is bounded by nHF + o(nHF), where HF is the entropy of F, which is allowed to vary with n. Since these bounds hold for any n and any monotone F, we are able to show that our codes are strongly minimax with respect to relative redundancy (as defined by Elias (1975)). Our proofs make use of an elegant inequality due to Aaron Wyner (1972).
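
    As a concrete example of a universal integer code, here is the classical Elias gamma code (a standard construction from this literature, not necessarily either of the paper's two codes): the integer x >= 1 is sent as floor(log2 x) zeros followed by the binary expansion of x, for a total of 2*floor(log2 x) + 1 bits.

    ```python
    def elias_gamma(x: int) -> str:
        """Elias gamma code: unary length prefix, then the binary expansion."""
        assert x >= 1
        b = bin(x)[2:]                    # binary expansion, leading 1 included
        return "0" * (len(b) - 1) + b

    for x in (1, 2, 5, 17):
        print(x, elias_gamma(x))          # 1, 010, 00101, 000010001
    ```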

  • Cayley differential unitary space-time codes

    Page(s): 1485 - 1503
    PDF (601 KB) | HTML

    One method for communicating with multiple antennas is to encode the transmitted data differentially using unitary matrices at the transmitter, and to decode differentially without knowing the channel coefficients at the receiver. Since channel knowledge is not required at the receiver, differential schemes are ideal for use on wireless links where channel tracking is undesirable or infeasible, either because of rapid changes in the channel characteristics or because of limited system resources. Although this basic principle is well understood, it is not known how to generate good-performing constellations of unitary matrices, for any number of transmit and receive antennas and for any rate. This is especially true at high rates where the constellations must be rapidly encoded and decoded. We propose a class of Cayley codes that works with any number of antennas, and has efficient encoding and decoding at any rate. The codes are named for their use of the Cayley transform, which maps the highly nonlinear Stiefel manifold of unitary matrices to the linear space of skew-Hermitian matrices. This transformation leads to a simple linear constellation structure in the Cayley transform domain and to an information-theoretic design criterion based on emulating a Cauchy random matrix. Moreover, the resulting Cayley codes allow polynomial-time near-maximum-likelihood (ML) decoding based on either successive nulling/canceling or sphere decoding. Simulations show that the Cayley codes allow efficient and effective high-rate data transmission in multiantenna communication systems without knowing the channel.
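
    The Cayley transform itself is a one-liner to demonstrate: it maps a skew-Hermitian matrix A (one with A^H = -A) to the unitary matrix V = (I + A)^{-1}(I - A). The sketch below shows only the map, not the paper's code construction.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    M = 3
    B = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
    A = (B - B.conj().T) / 2                   # skew-Hermitian: A^H = -A

    V = np.linalg.solve(np.eye(M) + A, np.eye(M) - A)   # Cayley transform
    print(np.allclose(V.conj().T @ V, np.eye(M)))       # True: V is unitary
    ```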

  • Communicating via a processing broadcast satellite

    Page(s): 1243 - 1249
    PDF (362 KB) | HTML

    Three dependent users are physically separated but communicate with each other via a satellite. Each user generates data which it stores locally. In addition, each user sends a message to the satellite. The satellite processes the messages received from the users and broadcasts one common message to all three users. Each user must be capable of reconstructing the data of the other two users based upon the broadcast message and its own stored data. Our problem is to determine the minimum amount of information which must be transmitted to and from the satellite. The solution to this problem is obtained for the case where subsequent data triples that are produced by the users are independent and identically distributed. The three symbols within each triple are assumed to be dependent. Crucial for the solution is an achievability proof that involves cascaded Slepian-Wolf (1973) source coding.

  • Everlasting security in the bounded storage model

    Page(s): 1668 - 1680
    PDF (530 KB) | HTML

    We address the problem of the security of cryptographic protocols in the face of future advances in computing technology and algorithmic research. The problem stems from the fact that computations which, at a given point in time, may be deemed infeasible can, in the course of years or decades, become possible with improved hardware and/or breakthroughs in code-breaking algorithms. In such cases, the security of historical, but nonetheless highly confidential, data may be in jeopardy. We present a scheme for efficient secure two-party communication with provable everlasting security. The security is guaranteed in the face of any future technological advances, given the current state of the art. Furthermore, the security of the messages is guaranteed even if the secret encryption/decryption key is revealed in the future. The scheme is based on the bounded storage model and provides information-theoretic security in this model. The bounded storage model postulates an adversary who is computationally unbounded and is bounded only in the amount of storage available to store the output of his computation. The bound on the storage can be arbitrarily large (e.g., 100 Tbytes), as long as it is fixed. Given this storage bound, our protocols guarantee that even a computationally all-powerful adversary gains no information about a message, except with a probability that is exponentially small in the security parameter k. The bound on storage space need only hold at the time of the message transmission. Thereafter, no additional storage space or computational power can help the adversary in deciphering the message. We present two protocols. The first protocol, which elaborates on the autoregressive (AR) protocol of Aumann and Rabin (see Advances in Cryptology-Crypto '99, p. 65-79, 1999), employs a short secret key whose size is independent of the length of the message, but uses many public random bits. The second protocol uses an optimal number of public random bits, but employs a longer secret key. Our proof of security utilizes a novel linear algebraic technique.
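
    The encryption idea behind the bounded storage model can be caricatured in a few lines. The toy sketch below is our illustration only; the actual protocols, parameter sizes, and security argument are far more involved. A huge random string is broadcast publicly, a short shared key selects positions in it, and the XOR of the selected bits serves as a pad that an adversary who could not store the whole broadcast cannot reconstruct later.

    ```python
    import secrets

    # Public randomness: broadcast once; too large for the adversary to store fully.
    PUBLIC_BITS = 10**6
    public = [secrets.randbits(1) for _ in range(PUBLIC_BITS)]

    # Short shared secret key: sampled positions (size independent of the message).
    key = [secrets.randbelow(PUBLIC_BITS) for _ in range(128)]

    def pad_bit(public, key):
        """One pad bit: XOR of the broadcast bits at the secret positions."""
        b = 0
        for pos in key:
            b ^= public[pos]
        return b

    message_bit = 1
    ciphertext = message_bit ^ pad_bit(public, key)   # encrypt
    print(ciphertext ^ pad_bit(public, key))          # decrypt -> 1
    ```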

  • Error exponents of expander codes

    Page(s): 1725 - 1729
    PDF (298 KB) | HTML

    We show that expander codes attain the capacity of the binary-symmetric channel under iterative decoding. The error probability has a positive exponent for all rates between zero and the channel capacity. The decoding complexity grows linearly with the code length.

  • An efficient universal prediction algorithm for unknown sources with limited training data

    Page(s): 1690 - 1693
    PDF (300 KB) | HTML

    Inspired by C. E. Shannon's celebrated paper "Prediction and entropy of printed English" (1951), we consider the optimal prediction error for unknown finite-alphabet ergodic Markov sources, for prediction algorithms that make inference about the most probable incoming letter, where the distribution of the unknown source is apparent only via a short training sequence of N + 1 letters. We allow N to be a polynomial function of K, the order of the Markov source, rather than the classical case where N is allowed to be exponential in K. A lower bound on the prediction error is formulated for universal prediction algorithms that are based on suffixes observed somewhere in the past training sequence X_{-N}^{-1} (i.e., it is assumed that the universal predictor, given the past (N + 1)-sequence which serves as a training sequence, is no better than the optimal predictor given only the longest suffix that appeared somewhere in the past X_{-N}^{-1} vector). For a class of stationary Markov sources (which includes all Markov sources with positive transition probabilities), a particular universal predictor is introduced, and it is demonstrated that its performance is "optimal" in the sense that it yields a prediction error which is close to the lower bound on the universal prediction error with limited training data. The results are nonasymptotic in the sense that they express the effect of limited training data on the efficiency of universal predictors. An asymptotically optimal universal predictor which is based on pattern matching appears elsewhere in the literature. However, the prediction error of such algorithms does not necessarily come close to the lower bound in the nonasymptotic region.
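
    The suffix-matching predictor described above admits a compact sketch (our illustration; the training string and tie-breaking are arbitrary): find the longest suffix of the current context that appears in the training sequence, and predict the letter that most often follows it there.

    ```python
    from collections import Counter

    def predict(training, context):
        """Predict the next letter from the longest context suffix seen in training."""
        for k in range(len(context), 0, -1):          # try the longest suffix first
            suffix = context[-k:]
            followers = Counter(
                training[i + k]
                for i in range(len(training) - k)
                if training[i:i + k] == suffix
            )
            if followers:
                return followers.most_common(1)[0][0]
        return Counter(training).most_common(1)[0][0]  # fall back to the marginal

    training = "abracadabraabracadabra"
    print(predict(training, "abr"))    # 'a': it always follows "abr" in training
    ```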


Aims & Scope

IEEE Transactions on Information Theory publishes papers concerned with the transmission, processing, and utilization of information.


Meet Our Editors

Editor-in-Chief
Frank R. Kschischang

Department of Electrical and Computer Engineering, University of Toronto