IEEE Transactions on Information Theory

Issue 3 • March 2003

Displaying Results 1 - 24 of 24
  • Correction to "Exact pairwise error probability of space-time codes"

    Publication Year: 2003 , Page(s): 766
    Cited by:  Papers (2)
  • Contributors

    Publication Year: 2003 , Page(s): 767 - 770
  • The third support weight enumerators of the doubly-even, self-dual [32,16,8] codes

    Publication Year: 2003 , Page(s): 740 - 746
    Cited by:  Papers (6)

    We present combinatorial methods for computing the third support weight enumerators of the five doubly-even, self-dual [32,16,8] codes. The methods exploit relationships that exist between support weight enumerators and complete coset weight enumerators of a self-dual code.
  • New extremal binary [44,22,8] codes

    Publication Year: 2003 , Page(s): 747 - 748

    One of the problems in coding theory is constructing self-dual codes whose weight enumerators are not yet known to exist. We use the concept of neighbors to construct extremal binary self-dual [44,22,8] codes with weight enumerators that were previously not known to exist.
  • Constructions of codes from number fields

    Publication Year: 2003 , Page(s): 594 - 603
    Cited by:  Papers (1)

    We define number-theoretic error-correcting codes based on algebraic number fields, thereby providing a generalization of Chinese remainder codes akin to the generalization of Reed-Solomon codes to algebraic-geometric codes. Our construction is very similar to (and in fact less general than) the one given by Lenstra (1986), but the parallel with the function field case is more apparent, since we only use the non-Archimedean places for the encoding. We prove that over an alphabet of size as small as 19 there even exist asymptotically good number field codes of the type we consider. This result is based on the existence of certain number fields that have an infinite class field tower in which some primes of small norm split completely.
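
    A toy version of the classical Chinese remainder code that this construction generalizes, sketched in Python (the moduli, message, and erasure pattern are illustrative choices, not taken from the paper): a message integer below the product of the k smallest moduli is encoded as its residues modulo n pairwise-coprime moduli, and any k surviving residues determine it.

        # Toy Chinese remainder (CRT) code over the integers: the classical
        # construction that the number-field codes above generalize. The
        # moduli and message below are illustrative, not from the paper.

        def crt(residues, moduli):
            """Recover x mod prod(moduli) from its residues (moduli pairwise coprime)."""
            x, m = 0, 1
            for r, q in zip(residues, moduli):
                # choose t with  x + m*t == r (mod q)
                t = ((r - x) * pow(m, -1, q)) % q
                x, m = x + m * t, m * q
            return x

        primes = [11, 13, 17, 19, 23, 29]         # n = 6 pairwise-coprime "places"
        k = 3                                     # messages live below 11*13*17 = 2431
        message = 2003

        codeword = [message % p for p in primes]  # one residue per place

        # Erase any n - k = 3 symbols; the k survivors still pin down the message,
        # because their moduli multiply to more than 2431.
        kept = [1, 3, 5]
        decoded = crt([codeword[i] for i in kept], [primes[i] for i in kept])
        assert decoded == message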
  • Partial period distribution of FCSR sequences

    Publication Year: 2003 , Page(s): 761 - 765
    Cited by:  Papers (13)

    Klapper and Goresky (1995) introduced feedback with carry shift registers (FCSRs) and presented an important class of FCSR sequences, the l-sequences. They showed that the numbers of 0s and 1s occurring in one full period are equal. We discuss the partial period distribution of l-sequences and show that, as the period becomes large, the proportion of 1s (resp., 0s) occurring in any partial period approaches 50%.
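
    A small numerical illustration of the balance properties discussed above, assuming the standard exponential representation of an l-sequence, a_i = (2^{-i} mod q) mod 2, with connection integer q an odd prime having 2 as a primitive root; q = 37 is an arbitrary small choice, not taken from the paper.

        # Balance of an l-sequence via its exponential representation
        # a_i = (2^{-i} mod q) mod 2, for q = 37 (2 is a primitive root mod 37,
        # so the period is q - 1 = 36). The choice of q is illustrative only.

        q = 37
        inv2 = pow(2, -1, q)                 # 2^{-1} mod q

        seq, x = [], 1                       # x tracks 2^{-i} mod q
        for _ in range(q - 1):
            seq.append(x % 2)
            x = (x * inv2) % q

        # One full period is exactly balanced ...
        print("full period :", seq.count(1), "ones,", seq.count(0), "zeros")

        # ... while partial periods are only approximately balanced.
        for L in (9, 18, 27):
            print(f"first {L:2d} bits: fraction of ones = {seq[:L].count(1) / L:.2f}")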
  • Testing proportionality for autoregressive processes

    Publication Year: 2003 , Page(s): 672 - 681
    Cited by:  Papers (1)

    We introduce a new hypothesis test to determine whether or not two autoregressive spectral densities are proportional. A test for autoregressive coefficient equality or randomness is deduced. We also derive the exact asymptotic behavior of these tests under parametric alternatives and show that, given a significance level, our tests are the most powerful (MP) tests.
  • The α-EM algorithm: surrogate likelihood maximization using α-logarithmic information measures

    Publication Year: 2003 , Page(s): 692 - 706
    Cited by:  Papers (9)

    A new likelihood maximization algorithm called the α-EM algorithm (α-expectation-maximization algorithm) is presented. This algorithm outperforms the traditional or logarithmic EM algorithm in terms of convergence speed for an appropriate range of the design parameter α. The log-EM algorithm is a special case corresponding to α=-1. The main idea behind the α-EM algorithm is to search for an effective surrogate function or a minorizer for the maximization of the observed data's likelihood ratio. The surrogate function adopted in this paper is based upon the α-logarithm which is related to the convex divergence. The convergence speed of the α-EM algorithm is theoretically analyzed through α-dependent update matrices and illustrated by numerical simulations. Finally, general guidelines for using the α-logarithmic methods are given. The choice of alternative surrogate functions is also discussed.
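
    For orientation, a common parameterization of the α-logarithm, consistent with the abstract's remark that α = -1 recovers the ordinary logarithm (the exact normalization used in the paper may differ):

        % One common form of the alpha-logarithm; alpha = -1 recovers log x.
        \[
          L^{(\alpha)}(x) \;=\; \frac{2}{1+\alpha}\Bigl(x^{\frac{1+\alpha}{2}} - 1\Bigr),
          \qquad \alpha \neq -1,
          \qquad \lim_{\alpha \to -1} L^{(\alpha)}(x) \;=\; \log x .
        \]

    Surrogates built from this family include the classical log-likelihood surrogate of EM as the α = -1 member, with the convergence behavior of the other members analyzed in the paper through α-dependent update matrices.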
  • Source coding exponents for zero-delay coding with finite memory

    Publication Year: 2003 , Page(s): 609 - 625
    Cited by:  Papers (17)

    Fundamental limits on the source coding exponents (or large deviations performance) of zero-delay finite-memory (ZDFM) lossy source codes are studied. Our main results are the following. For any memoryless source, a suitably designed encoder that time-shares (at most two) memoryless scalar quantizers is as good as any time-varying fixed-rate ZDFM code, in that it can achieve the fastest exponential rate of decay for the probability of excess distortion. A dual result is shown to apply to the probability of excess code length, among all fixed-distortion ZDFM codes with variable rate. Finally, it is shown that if the scope is broadened to ZDFM codes with variable rate and variable distortion, then a time-invariant entropy-coded memoryless quantizer (without time sharing) is asymptotically optimal under a "fixed-slope" large-deviations criterion (introduced and motivated here in detail) corresponding to a linear combination of the code length and the distortion. These results also lead to single-letter characterizations for the source coding error exponents of ZDFM codes.
  • Sequential greedy approximation for certain convex optimization problems

    Publication Year: 2003 , Page(s): 682 - 691
    Cited by:  Papers (11)

    A greedy algorithm for a class of convex optimization problems is presented. The algorithm is motivated from function approximation using a sparse combination of basis functions as well as some of its variants. We derive a bound on the rate of approximate minimization for this algorithm, and present examples of its application. Our analysis generalizes a number of earlier studies.
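
    A hedged sketch of the kind of update analyzed above, realized here as a Frank-Wolfe-style greedy step over the probability simplex; the quadratic objective and the 2/(k+2) step size are illustrative choices, not the paper's general setting.

        import numpy as np

        # Sequential greedy approximation sketch: at each step, mix the current
        # iterate with the single simplex vertex that minimizes the linearized
        # objective. Objective and step-size schedule are illustrative only.

        rng = np.random.default_rng(0)
        n = 20
        target = rng.random(n)
        target /= target.sum()                       # a point inside the simplex

        f = lambda x: 0.5 * np.sum((x - target) ** 2)
        grad = lambda x: x - target

        x = np.zeros(n)
        x[0] = 1.0                                   # start at a vertex
        for k in range(200):
            j = int(np.argmin(grad(x)))              # best vertex for the linear model
            lam = 2.0 / (k + 2)
            x = (1 - lam) * x                        # greedy convex combination
            x[j] += lam
        print("objective after 200 greedy steps:", f(x))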
  • Information-theoretic analysis of information hiding

    Publication Year: 2003 , Page(s): 563 - 593
    Cited by:  Papers (193)  |  Patents (1)

    An information-theoretic analysis of information hiding is presented, forming the theoretical basis for design of information-hiding systems. Information hiding is an emerging research area which encompasses applications such as copyright protection for digital media, watermarking, fingerprinting, steganography, and data embedding. In these applications, information is hidden within a host data set and is to be reliably communicated to a receiver. The host data set is intentionally corrupted, but in a covert way, designed to be imperceptible to a casual analysis. Next, an attacker may seek to destroy this hidden information, and for this purpose, introduce additional distortion to the data set. Side information (in the form of cryptographic keys and/or information about the host signal) may be available to the information hider and to the decoder. We formalize these notions and evaluate the hiding capacity, which upper-bounds the rates of reliable transmission and quantifies the fundamental tradeoff between three quantities: the achievable information-hiding rates and the allowed distortion levels for the information hider and the attacker. The hiding capacity is the value of a game between the information hider and the attacker. The optimal attack strategy is the solution of a particular rate-distortion problem, and the optimal hiding strategy is the solution to a channel-coding problem. The hiding capacity is derived by extending the Gel'fand-Pinsker (1980) theory of communication with side information at the encoder. The extensions include the presence of distortion constraints, side information at the decoder, and unknown communication channel. Explicit formulas for capacity are given in several cases, including Bernoulli and Gaussian problems, as well as the important special case of small distortions. In some cases, including the last two above, the hiding capacity is the same whether or not the decoder knows the host data set. It is shown that many existing information-hiding systems in the literature operate far below capacity.
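
    For reference, the Gel'fand-Pinsker (1980) capacity that the hiding-capacity analysis above extends has the well-known single-letter form below; the paper adds distortion constraints, decoder side information, and a game against the attack channel on top of it.

        % Capacity with channel state S noncausally known at the encoder
        % (Gel'fand-Pinsker), U an auxiliary random variable:
        \[
          C \;=\; \max_{p(u,\,x \mid s)} \bigl[\, I(U;Y) \;-\; I(U;S) \,\bigr].
        \]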
  • Distributed source coding using syndromes (DISCUS): design and construction

    Publication Year: 2003 , Page(s): 626 - 643
    Cited by:  Papers (266)  |  Patents (23)

    We address the problem of compressing correlated distributed sources, i.e., correlated sources which are not co-located or which cannot cooperate to directly exploit their correlation. We consider the related problem of compressing a source which is correlated with another source that is available only at the decoder. This problem has been studied in the information theory literature under the name of the Slepian-Wolf (1973) source coding problem for the lossless coding case, and as "rate-distortion with side information" for the lossy coding case. We provide a constructive practical framework based on algebraic trellis codes, dubbed DIstributed Source Coding Using Syndromes (DISCUS), which is applicable in a variety of settings. Simulation results are presented for source coding of independent and identically distributed (i.i.d.) Gaussian sources with side information available at the decoder in the form of a noisy version of the source to be coded. Our results reveal the promise of this approach: using trellis-based quantization and coset construction, the performance of the proposed approach is within 2-5 dB of the Wyner-Ziv (1976) bound.
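
    A minimal lossless sketch of the syndrome idea, with a [7,4] Hamming code standing in for the paper's trellis codes (the correlation model, at most one flipped bit, is an illustrative assumption): the encoder describes X by its 3-bit syndrome instead of all 7 bits, and the decoder combines that syndrome with its side information Y to recover X exactly.

        import numpy as np

        # DISCUS-style syndrome coding over GF(2): a [7,4] Hamming code stands in
        # for the paper's trellis codes. Y is side information at the decoder and
        # differs from X in at most one position (illustrative correlation model).

        H = np.array([[1, 0, 1, 0, 1, 0, 1],    # parity-check matrix; column j is
                      [0, 1, 1, 0, 0, 1, 1],    # the binary expansion of j + 1
                      [0, 0, 0, 1, 1, 1, 1]])

        rng = np.random.default_rng(0)
        x = rng.integers(0, 2, size=7)          # source observed by the encoder
        e = np.zeros(7, dtype=int)
        e[rng.integers(7)] = 1                  # at most one disagreement
        y = (x + e) % 2                         # side information at the decoder

        syndrome = H @ x % 2                    # the encoder sends only these 3 bits

        # Decoder: the syndrome of the disagreement pattern reveals its location.
        s_diff = (H @ y + syndrome) % 2
        x_hat = y.copy()
        if s_diff.any():
            pos = int(s_diff @ [1, 2, 4]) - 1   # column "j+1 in binary" -> index j
            x_hat[pos] ^= 1
        assert (x_hat == x).all()
        print("X recovered from Y plus a 3-bit syndrome")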
  • Existence and uniqueness of the solution for turbo decoding of parallel concatenated single parity check codes

    Publication Year: 2003 , Page(s): 722 - 725

    We consider turbo decoding of parallel concatenated single parity check (SPC) (K+1,K) codes, with row-column interleaving. The existence and uniqueness of the asymptotic probability density evaluated with the turbo algorithm is proved for every length K and every signal-to-noise ratio (SNR).
  • A nontrivial lower bound on the Shannon capacities of the complements of odd cycles

    Publication Year: 2003 , Page(s): 721 - 722
    Cited by:  Papers (3)

    This article contains a construction for independent sets in the powers of the complements of odd cycles. In particular, we show that α(C̄_{2n+3}^{2n}) ≥ 2^{2n} + 1. It follows that for n ≥ 0 we have Θ(C̄_{2n+3}) > 2, where Θ(G) denotes the Shannon (1956) capacity of the graph G.
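
    Making the deduction explicit, with the standard definition of the Shannon capacity in terms of independence numbers of strong-product powers:

        % Shannon capacity via strong-product powers G^{\boxtimes k}:
        \[
          \Theta(G) \;=\; \sup_{k \ge 1} \alpha\!\bigl(G^{\boxtimes k}\bigr)^{1/k},
        \]
        % so, for n >= 1, an independent set of size 2^{2n}+1 in the (2n)-th power gives
        \[
          \Theta\!\bigl(\overline{C}_{2n+3}\bigr) \;\ge\; \bigl(2^{2n} + 1\bigr)^{1/(2n)} \;>\; 2 .
        \]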
  • Capacity-approaching space-time codes for systems employing four transmitter antennas

    Publication Year: 2003 , Page(s): 726 - 732
    Cited by:  Papers (102)  |  Patents (5)

    The design of space-time codes that are capable of approaching the capacity of multiple-input single-output (MISO) antenna systems is a challenging problem, yet one of high practical importance. While the remarkably simple scheme of Alamouti (1998) is capable of attaining the channel capacity in the case of two transmitter antennas and one receiver antenna (2,1), no such schemes are known for the case of more than two transmitter antennas. We propose a family of space-time codes that are especially designed for the case of four transmitter antennas and that are shown to attain a significant fraction of the open-loop Shannon capacity of the (4,1) channel.
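
    For context, the Alamouti (1998) scheme cited above transmits the orthogonal block below over two symbol periods and two antennas; its orthogonality is what makes (2,1) capacity attainable, and rate-one complex orthogonal designs of this kind are not available for four transmit antennas, which motivates the codes proposed in the paper.

        % Alamouti block: columns index the two transmit antennas, rows the two
        % symbol periods; the columns are orthogonal for any s_1, s_2.
        \[
          \mathbf{X} \;=\;
          \begin{pmatrix}
            s_1 & s_2 \\[2pt]
            -s_2^{*} & s_1^{*}
          \end{pmatrix},
          \qquad
          \mathbf{X}^{H}\mathbf{X} \;=\; \bigl(|s_1|^2 + |s_2|^2\bigr)\,\mathbf{I}_2 .
        \]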
  • A Goppa-like bound on the trellis state complexity of algebraic-geometric codes

    Publication Year: 2003 , Page(s): 733 - 737

    For a linear code C of length n and dimension k, Wolf (1978) noticed that the trellis state complexity s(C) of C is upper-bounded by w(C):=min(k,n-k). We point out some new lower bounds for s(C). In particular, if C is an algebraic-geometric code, then s(C)≥w(C)-(g-a), where g is the genus of the underlying curve and a is the abundance of the code.
  • A note on the equivalence between strict optical orthogonal codes and difference triangle sets

    Publication Year: 2003 , Page(s): 759 - 761
    Cited by:  Papers (3)

    Zhang (see IEEE Trans. Commun., vol. 47, pp. 967-973, July 1999) proposed a special family of optical address codes, called strict optical orthogonal codes (S-OOCs), for fiber-optic code-division multiple-access (FO-CDMA) networks. Such codes strictly guarantee that both the cross-correlation and autocorrelation functions are constrained to the value one in fully asynchronous data communications and ultrafast switching. Zhang's work presented the theory and design of S-OOCs, together with several examples, comparison tables, and performance analyses. In this article, we establish the equivalence between S-OOCs and so-called difference triangle sets (DTSs), which have been extensively studied previously. Thus, all the known constructions, bounds, and analyses for DTSs can be directly applied to S-OOCs.
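
    A minimal check of the difference-distinctness property behind the equivalence, using the common definition of a difference triangle set (all nonzero within-block differences are distinct across the whole family); the blocks below are illustrative, not taken from the paper.

        # Check the difference triangle set (DTS) property: no nonzero difference
        # a - b (a, b in the same block) occurs twice anywhere in the family.
        # The example blocks are illustrative only.

        def is_difference_triangle_set(blocks):
            seen = set()
            for block in blocks:
                for a in block:
                    for b in block:
                        if a != b:
                            if a - b in seen:
                                return False
                            seen.add(a - b)
            return True

        print(is_difference_triangle_set([[0, 1, 3], [0, 5, 11]]))   # True
        print(is_difference_triangle_set([[0, 1, 3], [0, 2, 7]]))    # False: difference 2 repeats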
  • Iterate-averaging sign algorithms for adaptive filtering with applications to blind multiuser detection

    Publication Year: 2003 , Page(s): 657 - 671
    Cited by:  Papers (12)

    Motivated by the developments on iterate averaging of recursive stochastic approximation algorithms and asymptotic analysis of sign-error algorithms for adaptive filtering, this work develops two-stage sign algorithms for adaptive filtering. The proposed algorithms are based on constructions of a sequence of estimates using large step sizes followed by iterate averaging. Our main effort is devoted to improving the performance of the algorithms by establishing asymptotic normality of a suitably scaled sequence of the estimation errors. The asymptotic covariance is calculated and shown to be the smallest possible. Hence, the asymptotic efficiency or asymptotic optimality is obtained. Then variants of the algorithm including sign-regressor procedures and constant-step algorithms are studied. The minimal window width of averaging is also dealt with. Finally, iterate-averaging algorithms for blind multiuser detection in direct sequence/code-division multiple-access (DS/CDMA) systems are proposed and developed, and numerical examples are examined.
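
    A compact sketch of the two-stage idea described above, in a plain system-identification setting rather than blind multiuser detection: a sign-error LMS recursion with a relatively large constant step size, followed by averaging of the iterates (all parameters are illustrative).

        import numpy as np

        # Two-stage sign-error adaptive filtering: (1) sign-error LMS with a
        # comparatively large constant step size, (2) iterate averaging over the
        # later iterates. Setup and parameters are illustrative only.

        rng = np.random.default_rng(2)
        n_steps, d, mu = 5000, 4, 0.02
        w_true = np.array([0.5, -1.0, 0.8, 0.3])

        w = np.zeros(d)
        history = []
        for _ in range(n_steps):
            u = rng.standard_normal(d)                   # input regressor
            y = w_true @ u + 0.1 * rng.standard_normal()
            e = y - w @ u                                # prediction error
            w = w + mu * np.sign(e) * u                  # stage 1: sign-error update
            history.append(w.copy())

        w_avg = np.mean(history[n_steps // 2:], axis=0)  # stage 2: iterate averaging
        print("last iterate error    :", np.linalg.norm(w - w_true))
        print("averaged iterate error:", np.linalg.norm(w_avg - w_true))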
  • Complex-field coding for OFDM over fading wireless channels

    Publication Year: 2003 , Page(s): 707 - 720
    Cited by:  Papers (90)  |  Patents (30)

    Orthogonal frequency-division multiplexing (OFDM) converts a time-dispersive channel into parallel subchannels, and thus facilitates equalization and (de)coding. But when the channel has nulls close to or on the fast Fourier transform (FFT) grid, uncoded OFDM faces serious symbol recovery problems. As an alternative to various error-control coding techniques that have been proposed to ameliorate the problem, we perform complex-field coding (CFC) before the symbols are multiplexed. We quantify the maximum achievable diversity order for independent and identically distributed (i.i.d.) or correlated Rayleigh-fading channels, and also provide design rules for achieving the maximum diversity order. The maximum coding gain is given, and the encoder enabling the maximum coding gain is also found. Simulated performance comparisons of CFC-OFDM with existing block and convolutionally coded OFDM alternatives favor CFC-OFDM for the code rates used in a HiperLAN2 experiment.
  • Cyclic codes over GR(4^m) which are also cyclic over Z_4

    Publication Year: 2003 , Page(s): 749 - 758
    Cited by:  Papers (1)

    Let GR(4^m) be the Galois ring of characteristic 4 and cardinality 4^m, and let ᾱ = {α_0, α_1, ..., α_{m-1}} be a basis of GR(4^m) over Z_4 when we regard GR(4^m) as a free Z_4-module of rank m. Define the map d_ᾱ from GR(4^m)[z]/(z^n - 1) into Z_4[z]/(z^{mn} - 1) by d_ᾱ(a(z)) = Σ_{i=0}^{m-1} Σ_{j=0}^{n-1} a_{ij} z^{mj+i}, where a(z) = Σ_{j=0}^{n-1} a_j z^j and a_j = Σ_{i=0}^{m-1} a_{ij} α_i with a_{ij} ∈ Z_4. Then, for any linear code C of length n over GR(4^m), its image d_ᾱ(C) is a Z_4-linear code of length mn. In this article, for n and m odd integers, we determine all pairs (ᾱ, C) such that d_ᾱ(C) is Z_4-cyclic, where ᾱ is a basis of GR(4^m) over Z_4 and C is a cyclic code of length n over GR(4^m).
  • On the least covering radius of binary linear codes with small lengths

    Publication Year: 2003 , Page(s): 738 - 740

    Using a classification of codes with a certain covering radius, it is proved that the least covering radii are t[17,6] = 5, t[17,8] = 4, t[18,7] = 5, t[19,7] = 5, t[20,8] = 5, and t[21,7] = 6. As a corollary, four improvements on the length function l(m,R) are found. It is also shown that there exists a unique [14,6] code with covering radius 3.
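
    A brute-force illustration of the quantity being computed (feasible only for tiny parameters; the [6,3] code below is an arbitrary example, far smaller than the codes classified above): the covering radius is the largest distance from any length-n word to the code.

        from itertools import product

        # Brute-force covering radius of a small binary linear code: the maximum,
        # over all length-n words, of the Hamming distance to the nearest codeword.
        # The [6,3] generator matrix is an arbitrary illustration.

        G = [(1, 0, 0, 1, 1, 0),
             (0, 1, 0, 1, 0, 1),
             (0, 0, 1, 0, 1, 1)]
        n, k = 6, 3

        codewords = {tuple(sum(c * row[i] for c, row in zip(coeffs, G)) % 2
                           for i in range(n))
                     for coeffs in product((0, 1), repeat=k)}

        dist = lambda u, v: sum(a != b for a, b in zip(u, v))
        covering_radius = max(min(dist(w, c) for c in codewords)
                              for w in product((0, 1), repeat=n))
        print("covering radius of this [6,3] code:", covering_radius)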
  • Resilience properties of redundant expansions under additive noise and quantization

    Publication Year: 2003 , Page(s): 644 - 656
    Cited by:  Papers (23)

    Representing signals using coarsely quantized coefficients of redundant expansions is an interesting source coding paradigm, the most important practical case of which is oversampled analog-to-digital (A/D) conversion. Signal reconstruction from quantized redundant expansions and the accuracy of such representations are problems which are not well understood, and we study them in this paper for uniform scalar quantization in finite-dimensional spaces. To give a more global perspective, we first present an analysis of the resilience of redundant expansions to degradation by additive noise in general, and then focus on the effects of uniform scalar quantization. The accuracy of signal representations obtained by applying uniform scalar quantization to coefficients of redundant expansions, measured as the mean-squared Euclidean norm of the reconstruction error, has been previously shown to be lower-bounded by a 1/r^2 expression, where r is the expansion redundancy. We establish some general conditions under which the 1/r^2 accuracy can actually be attained, and under those conditions prove a 1/r^2 upper error bound. For a particular class of structured expansions, which includes many popular frame classes, we propose reconstruction algorithms which attain the 1/r^2 accuracy at low numerical complexity. These structured expansions, moreover, facilitate efficient encoding of quantized coefficients in a manner which requires only a logarithmic bit-rate increase in redundancy, resulting in an exponential error decay in the bit rate. Results presented in this paper are immediately applicable to oversampled A/D conversion of periodic bandlimited signals.
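
    A small numerical sketch of the setting (not of the paper's algorithms): a vector is expanded in an r-times redundant frame, the coefficients are uniformly scalar-quantized, and the signal is reconstructed with a plain pseudoinverse; this linear reconstruction typically shows the slower 1/r error decay, whereas the paper's point is that suitable expansions and reconstruction algorithms reach the 1/r^2 rate.

        import numpy as np

        # Quantized redundant expansions: expand x in an r-times redundant frame,
        # apply uniform scalar quantization to the coefficients, and reconstruct
        # linearly with the pseudoinverse. Parameters are illustrative only.

        rng = np.random.default_rng(1)
        d, step = 4, 0.05                        # signal dimension, quantizer step
        x = rng.standard_normal(d)

        for r in (2, 4, 8, 16, 32):
            F = rng.standard_normal((r * d, d))  # analysis frame: r*d rows in R^d
            F /= np.linalg.norm(F, axis=1, keepdims=True)
            coeffs = step * np.round(F @ x / step)   # uniform scalar quantization
            x_hat = np.linalg.pinv(F) @ coeffs       # linear reconstruction
            print(f"r = {r:2d}   squared error = {np.sum((x - x_hat) ** 2):.2e}")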
  • Complexity distortion theory

    Publication Year: 2003 , Page(s): 604 - 608
    Cited by:  Papers (11)

    Complexity distortion theory (CDT) is a mathematical framework providing a unifying perspective on media representation. The key component of this theory is the substitution of the decoder in Shannon's classical communication model with a universal Turing machine. Using this model, the mathematical framework for examining the efficiency of coding schemes is the algorithmic or Kolmogorov (1965) complexity. CDT extends this framework to include distortion by defining the complexity distortion function. We show that despite their different natures, CDT and rate distortion theory (RDT) predict asymptotically the same results, under stationary and ergodic assumptions. This closes the circle of representation models, from probabilistic models of information proposed by Shannon in information and rate distortion theories, to deterministic algorithmic models, proposed by Kolmogorov in Kolmogorov complexity theory and its extension to lossy source coding, CDT.
  • On the error exponent and capacity games of private watermarking systems

    Publication Year: 2003 , Page(s): 537 - 562
    Cited by:  Papers (23)

    Watermarking systems are analyzed as a game between an information hider, a decoder, and an attacker. The information hider is allowed to cause some tolerable level of distortion to the original data within which the message is hidden, and the resulting distorted data can suffer some additional amount of distortion caused by an attacker who aims at erasing the message. Two games are investigated: the error exponent game and the coding capacity game. Motivated by a worst case approach, we assume that the attacker is informed of the hiding strategy taken by the information hider and the decoder, which are uninformed of the attacking scheme. This approach leads to the maximin error exponent and maximin coding capacity as objective functions. It is assumed that the host data is drawn from a finite-alphabet memoryless stationary source, and its realization (side information) is available at the encoder and the decoder. A single-letter expression for the maximin error exponent is found under large deviations distortion constraints. Moreover, we find an asymptotically optimal random coding distribution, a universal decoder, and a worst case attack channel. It is proved that there is a saddle point in the asymptotic exponent and that the minimax and the maximin error exponents are equal. Finally, a single-letter expression for the coding capacity, i.e., the maximin reliable information rate, is found.

Aims & Scope

IEEE Transactions on Information Theory publishes papers concerned with the transmission, processing, and utilization of information.


Meet Our Editors

Editor-in-Chief
Frank R. Kschischang

Department of Electrical and Computer Engineering