
IEEE Transactions on Information Theory

Volume 42, Issue 2 • March 1996


Displaying Results 1 - 25 of 37
  • Algebraic Function Fields and Codes [Book Reviews]

    Publication Year: 1996
    PDF (402 KB)
    Freely Available from IEEE
  • Group codes generated by finite reflection groups

    Publication Year: 1996 , Page(s): 519 - 528
    Cited by:  Papers (10)
    PDF (916 KB)

    Slepian-type group codes generated by finite Coxeter groups are considered. The resulting class of group codes is a generalization of the well-known permutation modulation codes of Slepian (1965). It is shown that a restricted initial-point problem for these codes has a canonical solution that can easily be computed. This allows one to enumerate all optimal group codes in this restricted sense and essentially solves the initial-point problem for all finite reflection groups. Formulas for the cardinality and the minimum distance of such codes are given. The new optimal group codes obtained from exceptional reflection groups achieve high rates and have excellent distance properties. The decoding regions for maximum-likelihood (ML) decoding are explicitly characterized and an efficient ML-decoding algorithm is presented. This algorithm relies on an extension of Slepian's decoding of permutation modulation and has similarly low complexity.

  • Irregular sampling for spline wavelet subspaces

    Publication Year: 1996 , Page(s): 623 - 627
    Cited by:  Papers (25)  |  Patents (1)
    PDF (332 KB)

    Spline wavelets ψm(t) are important in time-frequency localization because (i) ψm can be made arbitrarily close to the optimal case when m is sufficiently large, and (ii) ψm has compact support and a simple analytic expression, which leads to efficient computation. Although the spline wavelet subspaces are simple, Walter's well-known sampling theorem does not hold if the spline order m is even. Moreover, when irregular sampling is considered in these spaces, it is hard to determine the sampling density, which is a serious problem in applications. In this correspondence, a general sampling theorem is obtained for m⩾3 in the sense of iterative construction, and the sampling density δm is estimated.

  • Good lattice constellations for both Rayleigh fading and Gaussian channels

    Publication Year: 1996 , Page(s): 502 - 518
    Cited by:  Papers (111)  |  Patents (10)
    PDF (1384 KB)

    Recent work on lattices matched to the Rayleigh fading channel has shown how to construct good signal constellations with high spectral efficiency. We present a new family of lattice constellations, based on complex algebraic number fields, which have good performance on Rayleigh fading channels. Some of these lattices also present a reasonable packing density and thus may be used at the same time over a Gaussian channel. Conversely, we show that particular versions of the best lattice packings (D4, E6, E8, K12, Λ16, Λ24), constructed from totally complex algebraic cyclotomic fields, present better performance over the Rayleigh fading channel. The practical interest in such signal constellations arises from the need to transmit information at high rates over both terrestrial and satellite links. Some further results in algebraic number theory related to ideals and their factorization are presented, and the decoding algorithm used with these lattice constellations is illustrated together with practical results.

  • Classification with finite memory

    Publication Year: 1996 , Page(s): 337 - 347
    Cited by:  Papers (14)
    PDF (760 KB)

    Consider the following situation. A device called a classifier observes a probability law P on l-vectors from an alphabet of size A. Its task is to observe a second probability law Q and decide whether P≡Q or P and Q are sufficiently different according to some appropriate criterion. If the classifier has unlimited memory available (so that it can remember P(z) exactly for all z), this is a simple matter. In fact, for most such difference criteria, a finite memory of 2(log A)l + o(l) bits will suffice (for large l), i.e., it is enough to store a finite approximation of P(z) for all A^l vectors z. In a sense made precise in the paper, it is shown that a memory of only about 2Rl bits is required, where the quantity R < log A is closely related to the entropy of P. Further, it is shown that if, instead of being given P(z) for all z, the classifier is given a training sequence drawn according to the probability law P that can be stored using about 2Rl bits, then correct classification is also possible.
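
    As a rough numerical illustration of the two memory figures quoted above, the following sketch simply evaluates 2(log A)l and 2Rl for hypothetical values of A, l, and R (these numbers are not taken from the paper):

        import math

        # Hypothetical parameters, chosen only to illustrate the formulas in the abstract.
        A = 2          # alphabet size
        l = 40         # length of the observed vectors
        R = 0.5        # assumed value with R < log2(A), close to the entropy of P

        naive_bits = 2 * math.log2(A) * l      # 2(log A)l bits: the generic sufficient memory
        reduced_bits = 2 * R * l               # about 2Rl bits: the memory shown to suffice

        print(f"2(log A)l = {naive_bits:.0f} bits,  2Rl = {reduced_bits:.0f} bits")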

  • On the cardinality of systematic authentication codes via error-correcting codes

    Publication Year: 1996 , Page(s): 566 - 578
    Cited by:  Papers (8)  |  Patents (3)
    PDF (1084 KB)

    In both open and private communication, the participants face potential threats from a malicious enemy who has access to the communication channel and can insert messages (impersonation attack) or alter already transmitted messages (substitution attack). Authentication codes (A-codes) have been developed to provide protection against these threats. In this paper we introduce a new distance, called the authentication distance (A-distance), and show that an A-code can be described as a code for the A-distance. The A-distance is directly related to the probability PS of success in a substitution attack. We show how to transform an error-correcting code into an A-code and vice versa. We further use these transformations to provide both upper and lower bounds on the size of the information to be authenticated, and study their asymptotic behavior. As examples of the results obtained, we prove that the cardinality of the source state space grows exponentially with the number of keys provided PS>PI, we generalize the square-root bound given by Gilbert, MacWilliams, and Sloane in 1979, and we provide very efficient constructions using concatenated Reed-Solomon codes.
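
    For readers unfamiliar with the quantities PI and PS, the following sketch brute-forces them for the classical affine A-code with tag t = a·s + b over a small prime field (a standard textbook example shown only for illustration, not a construction from this paper):

        from itertools import product

        q = 5                                        # small prime field size (hypothetical choice)
        keys = list(product(range(q), repeat=2))     # key = (a, b); tag(s) = (a*s + b) mod q

        def tag(key, s):
            a, b = key
            return (a * s + b) % q

        # Impersonation: best chance that a forged (source, tag) pair is accepted without seeing any message.
        P_I = max(sum(tag(k, s) == t for k in keys) for s in range(q) for t in range(q)) / len(keys)

        # Substitution: after observing a valid (s, t), best chance that a forged (s2, t2), s2 != s, is accepted.
        P_S = 0.0
        for s, t in product(range(q), repeat=2):
            valid = [k for k in keys if tag(k, s) == t]
            for s2, t2 in product(range(q), repeat=2):
                if s2 != s and valid:
                    P_S = max(P_S, sum(tag(k, s2) == t2 for k in valid) / len(valid))

        print(P_I, P_S)                              # both come out to 1/q = 0.2 for this code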

  • Approaching capacity of a continuous channel by discrete input distributions

    Publication Year: 1996 , Page(s): 671 - 675
    Cited by:  Papers (2)
    PDF (524 KB)

    In this paper, memoryless channels with general alphabets and an input constraint are considered. Sufficient conditions are given under which channel capacity can be approached by discrete input distributions, or by uniform input distributions with finite support. As an example, the additive white Gaussian noise channel is considered.

  • Stochastic processes that generate polygonal and related random fields

    Publication Year: 1996 , Page(s): 606 - 617
    Cited by:  Papers (1)
    PDF (876 KB)

    A reversible, ergodic Markov process taking values in the space of polygonally segmented images is constructed. The stationary distribution of this process can be made to correspond to a Gibbs-type distribution for polygonal random fields as introduced by Arak and Surgailis (1989), and a few variants thereof, such as those arising in Bayesian analysis of random fields. Extensions to generalized polygonal random fields are presented in which the segmentation boundaries are not necessarily straight line segments.

  • On the cost of finite block length in quantizing unbounded memoryless sources

    Publication Year: 1996 , Page(s): 480 - 487
    Cited by:  Papers (5)
    PDF (656 KB)

    The problem of fixed-rate block quantization of an unbounded real memoryless source is studied. It is proved that if the source has a finite sixth moment, then there exists a sequence of quantizers Qn of increasing dimension n and fixed rate R such that the mean squared distortion Δ(Qn) is bounded as Δ(Qn) ⩽ D(R) + O(√(log n/n)), where D(R) is the distortion-rate function of the source. Applications of this result include the evaluation of the distortion redundancy of fixed-rate universal quantizers, and the generalization to the non-Gaussian case of a result of Wyner on the transmission of a quantized Gaussian source over a memoryless channel.

  • Complex Hadamard matrices related to Bent sequences

    Publication Year: 1996
    Cited by:  Papers (6)
    PDF (96 KB)

    Hadamard matrices are used in many applications, such as error-correcting codes and spreading sequences. This article gives a construction of complex Hadamard matrices of order p^n, where p is prime and n is even. The complex Hadamard matrices include bi-phase Hadamard matrices, whose elements take values in {-1, +1}, and four-phase Hadamard matrices, whose elements take values in {±1, ±j} with j=√(-1).
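
    As a minimal sanity check of the defining property H H* = n·I (not an instance of the order-p^n construction described above), one can verify a tiny four-phase matrix directly:

        import numpy as np

        j = 1j
        H = np.array([[1,  1],
                      [j, -j]])                              # entries in {1, -1, j, -j}

        n = len(H)
        print(np.allclose(H @ H.conj().T, n * np.eye(n)))    # True: rows are orthogonal, each with squared norm n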

  • Asymptotic distribution of the errors in scalar and vector quantizers

    Publication Year: 1996 , Page(s): 446 - 460
    Cited by:  Papers (22)  |  Patents (1)
    PDF (1268 KB)

    High-rate (or asymptotic) quantization theory has found formulas for the average squared length (more generally, the qth moment of the length) of the error produced by various scalar and vector quantizers with many quantization points. In contrast, this paper finds an asymptotic formula for the probability density of the length of the error and, in certain special cases, for the probability density of the multidimensional error vector itself. The latter can be used to analyze the distortion of two-stage vector quantization. The former permits one to learn about the point density and cell shapes of a quantizer from a histogram of quantization error lengths. Histograms of the error lengths in simulations agree well with the derived formulas. Also presented are a number of properties of the error density, including the relationship between the error density, the point density, and the cell shapes; the fact that its qth moment equals Bennett's integral (a formula for the average distortion of a scalar or vector quantizer); and the fact that, for stationary sources, the marginals of the multidimensional error density of an optimal vector quantizer with large dimension are approximately i.i.d. Gaussian.

  • MDS array codes with independent parity symbols

    Publication Year: 1996 , Page(s): 529 - 542
    Cited by:  Papers (60)  |  Patents (13)
    PDF (1188 KB)

    A new family of maximum distance separable (MDS) array codes is presented. The code arrays contain p information columns and r independent parity columns, each column consisting of p-1 bits, where p is a prime. We extend a previously known construction for the case r=2 to three and more parity columns. It is shown that when r=3 such an extension is possible for any prime p. For larger values of r, we give necessary and sufficient conditions for our codes to be MDS, and then prove that if p belongs to a certain class of primes these conditions are satisfied up to r⩽8. One of the advantages of the new codes is that encoding and decoding may be accomplished using simple cyclic shifts and XOR operations on the columns of the code array. We develop efficient decoding procedures for the case of two- and three-column errors. This again extends the previously known results for the case of a single-column error. Another primary advantage of our codes is related to the problem of efficient information updates. We present upper and lower bounds on the average number of parity bits that have to be updated in an MDS code over GF(2^m), following an update in a single information bit. This average number is of importance in many storage applications which require frequent updates of information. We show that the upper bound obtained from our codes is close to the lower bound and, most importantly, does not depend on the size of the code symbols.
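
    The shift-and-XOR flavor of the encoding can be sketched with a deliberately simplified toy array code; this is only an illustration of the operations involved, not the authors' actual MDS construction, whose parity equations are more carefully designed:

        import numpy as np

        p, k = 5, 3                                  # a prime p and a hypothetical number of information columns
        rng = np.random.default_rng(0)
        info = rng.integers(0, 2, size=(p - 1, k))   # information array: k columns of p-1 bits each

        # Two parity columns built only from cyclic shifts and XORs of the information columns.
        parity0 = np.bitwise_xor.reduce([info[:, c] for c in range(k)])
        parity1 = np.bitwise_xor.reduce([np.roll(info[:, c], c) for c in range(k)])

        code_array = np.column_stack([info, parity0, parity1])
        print(code_array)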

  • Weight enumerators of extremal singly-even [60,30,12] codes

    Publication Year: 1996 , Page(s): 658 - 659
    Cited by:  Papers (8)
    PDF (168 KB)

    Conway and Sloane (1990) have listed the possible weight enumerators for extremal self-dual codes up to length 72. In this correspondence, we construct extremal singly-even self-dual [60,30,12] codes whose weight enumerator does not appear in this list. In addition, we present the possible weight enumerators for extremal self-dual codes of length 60.

  • On the asymptotic distribution of the errors in vector quantization

    Publication Year: 1996 , Page(s): 461 - 468
    Cited by:  Papers (11)
    PDF (616 KB)

    In a recent paper, Lee and Neuhoff (see ibid., vol. 42, no. 2, pp. 446-460, 1996) found an asymptotic formula for the distribution of the length of the errors produced by a vector quantizer with many quantization points. This distribution depends on the source probability density, the quantizer point density, and the quantizer shape profile. (The latter characterizes the shapes of the quantization cells as a function of position.) The purpose of this paper is to give a rigorous derivation of this formula by identifying precise conditions under which it holds: it is shown that if a sequence of vector quantizers with a given dimension and an increasing number of points has “specific” point densities and “specific” shape profiles converging to a “model” point density and a “model” shape profile, respectively, then the distribution of the length of the quantization error, suitably normalized, converges to the aforementioned formula, with the model point density and the model shape profile substituted.

  • The ternary Golay code, the integers mod 9, and the Coxeter-Todd lattice

    Publication Year: 1996 , Page(s): 636 - 637
    Cited by:  Papers (3)
    PDF (208 KB)

    The 12-dimensional Coxeter-Todd lattice can be obtained by lifting the ternary Golay code to a code over the integers mod 9 and applying Construction A.

  • Nearly optimal multiuser codes for the binary adder channel

    Publication Year: 1996 , Page(s): 387 - 398
    Cited by:  Papers (10)
    PDF (1028 KB)

    Coding schemes for the T-user binary adder channel are investigated. Recursive constructions are given for two families of mixed-rate, multiuser codes. It is shown that these basic codes can be combined by time-sharing to yield codes approaching most rates in the T-user capacity region. In particular, the best codes constructed herein achieve a sum rate R1+...+RT that is higher than that of all previously reported codes for almost every T and is within 0.547 bit per channel use of the information-theoretic limit. Extensions to a T-user, Q-frequency adder channel are also discussed.
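
    For intuition about unique decodability on the noiseless two-user binary adder channel, the classical code pair C1 = {00, 11}, C2 = {00, 01, 10} (a well-known earlier example, not one of the codes constructed in this paper) can be checked exhaustively:

        from itertools import product

        C1 = [(0, 0), (1, 1)]                        # user 1: rate 1/2
        C2 = [(0, 0), (0, 1), (1, 0)]                # user 2: rate log2(3)/2, sum rate about 1.79 bits/use

        # The channel output is the componentwise integer sum; the pair is uniquely decodable
        # exactly when all |C1|*|C2| output words are distinct.
        outputs = {tuple(a + b for a, b in zip(c1, c2)) for c1, c2 in product(C1, C2)}
        print(len(outputs) == len(C1) * len(C2))     # True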

  • Time discretization of continuous-time filters and smoothers for HMM parameter estimation

    Publication Year: 1996 , Page(s): 593 - 605
    Cited by:  Papers (31)
    PDF (980 KB)

    In this paper we propose algorithms for parameter estimation of fast-sampled homogeneous Markov chains observed in white Gaussian noise. Our algorithms are obtained by the robust discretization of stochastic differential equations involved in the estimation of continuous-time hidden Markov models (HMM's) via the EM algorithm. We present two algorithms: the first is based on the robust discretization of continuous-time filters that were recently obtained by Elliott to estimate quantities used in the EM algorithm; the second is based on the discretization of continuous-time smoothers, yielding essentially the well-known Baum-Welch re-estimation equations. The smoothing formulas for continuous-time HMM's are new, and their derivation involves two-sided stochastic integrals. The choice of discretization results in equations which are identical to those obtained by deriving the results directly in discrete time. The filter-based EM algorithm has negligible memory requirements; indeed, they are independent of the number of observations. In comparison, the smoother-based discrete-time EM algorithm requires the use of the forward-backward algorithm, which is a fixed-interval smoothing algorithm and has memory requirements proportional to the number of observations. On the other hand, the computational complexity of the filter-based EM algorithm is greater than that of the smoother-based scheme. However, the filters may be suitable for parallel implementation. Using computer simulations we compare the smoother-based and filter-based EM algorithms for HMM estimation. We also provide estimates for the discretization error.

  • Spread-response precoding for communication over fading channels

    Publication Year: 1996 , Page(s): 488 - 501
    Cited by:  Papers (38)  |  Patents (1)
    PDF (1352 KB)

    Interleaving is an important technique for improving the effectiveness of traditional error-correcting codes in data transmission systems that exhibit multipath fading. Such channels often arise in mobile wireless communications. We present an alternative to interleaving for such systems, which we term “spread-response precoding”. From the perspective of the coded symbol stream, spread-response precoding effectively transforms an arbitrary Rayleigh fading channel into a simple nonfading channel with white, marginally Gaussian noise. Furthermore, spread-response precoding requires no additional power or bandwidth, and is attractive in terms of computational complexity, robustness, and delay considerations.

  • Location-correcting codes

    Publication Year: 1996 , Page(s): 554 - 565
    Cited by:  Papers (6)  |  Patents (1)
    PDF (1144 KB)

    We study codes over GF(q) that can correct t channel errors assuming the error values are known. This is a counterpart to the well-known problem of erasure correction, where error values are found assuming the locations are known. The correction capabilities of these so-called t-location-correcting codes (t-LCCs) are characterized by a new metric, the decomposability distance, which plays a role analogous to that of the Hamming metric in conventional error-correcting codes (ECCs). Based on the new metric, we present bounds on the parameters of t-LCCs that are counterparts to the classical Singleton, sphere-packing, and Gilbert-Varshamov bounds for ECCs. In particular, we show examples of perfect LCCs, and we study optimal (MDS-like) LCCs that attain the Singleton-type bound on the redundancy. We show that these optimal codes are generally much shorter than their erasure (or conventional ECC) analogs. The length n of any t-LCC that attains the Singleton-type bound for t>1 is bounded from above by t+O(√(q)), compared to length q+1, which is attainable in the conventional ECC case. We show constructions of optimal t-LCCs for t∈{1, 2, n-2, n-1, n} that attain the asymptotic length upper bounds, and constructions for other values of t that are optimal, yet whose lengths fall short of the upper bounds. The resulting asymptotic gap remains an open research problem. All the constructions presented can be efficiently decoded.

  • Hashing of databases based on indirect observations of Hamming distances

    Publication Year: 1996 , Page(s): 664 - 671
    Cited by:  Patents (2)
    PDF (696 KB)

    We describe hashing of databases as a problem of information and coding theory. It is shown that the triangle inequality for the Hamming distances between binary vectors may essentially decrease the computational effort of searching for a pattern in a database. Introducing the Lee distance on the space consisting of the Hamming distances leads to a new metric space in which the triangle inequality can be used effectively.
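
    A minimal sketch of how the triangle inequality prunes a pattern search (the reference vectors and data below are made up; the paper's scheme, which works with the Lee distance on the vector of Hamming distances, is more refined):

        import random

        def hamming(x, y):
            return sum(a != b for a, b in zip(x, y))

        random.seed(1)
        dim, n = 64, 1000
        db = [tuple(random.getrandbits(1) for _ in range(dim)) for _ in range(n)]
        refs = db[:4]                                           # a few reference vectors (arbitrary choice)
        index = [[hamming(x, r) for r in refs] for x in db]     # precomputed Hamming distances to the references

        def search(query, radius):
            dq = [hamming(query, r) for r in refs]
            hits, full_checks = [], 0
            for x, dx in zip(db, index):
                # Triangle inequality: |d(x,r) - d(query,r)| <= d(x,query), so x can be skipped
                # whenever some reference already certifies d(x,query) > radius.
                if any(abs(a - b) > radius for a, b in zip(dx, dq)):
                    continue
                full_checks += 1
                if hamming(query, x) <= radius:
                    hits.append(x)
            return hits, full_checks

        hits, full_checks = search(db[42], radius=5)
        print(len(hits), "matches;", full_checks, "of", n, "vectors needed a full distance computation")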

  • A rate-splitting approach to the Gaussian multiple-access channel

    Publication Year: 1996 , Page(s): 364 - 375
    Cited by:  Papers (136)
    PDF (1236 KB)

    It is shown that any point in the capacity region of a Gaussian multiple-access channel is achievable by single-user coding without requiring synchronization among users, provided that each user “splits” data and signal into two parts. Based on this result, a new multiple-access technique called rate-splitting multiple accessing (RSMA) is proposed. RSMA is a code-division multiple-access scheme for the M-user Gaussian multiple-access channel for which the effort of finding the codes for the M users, of encoding, and of decoding is that of at most 2M-1 independent point-to-point Gaussian channels. The effects of bursty sources, multipath fading, and inter-cell interference are discussed, and directions for further research are indicated.
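
    The arithmetic behind rate splitting can be checked numerically for a two-user Gaussian multiple-access channel: if user 1 splits its power and the receiver decodes the virtual users successively (1a first, then user 2, then 1b), single-user decoding at each stage adds up exactly to the sum capacity. The powers and the split below are arbitrary illustrative values, not parameters from the paper:

        import math

        def C(snr):
            # point-to-point Gaussian capacity in bits per channel use
            return 0.5 * math.log2(1 + snr)

        P1, P2, N = 4.0, 2.0, 1.0                    # hypothetical user powers and noise variance
        P1a, P1b = 2.5, 1.5                          # user 1 splits its power, P1a + P1b = P1

        R1a = C(P1a / (N + P2 + P1b))                # decoded first; user 2 and virtual user 1b act as noise
        R2  = C(P2 / (N + P1b))                      # decoded second; only virtual user 1b remains as noise
        R1b = C(P1b / N)                             # decoded last; clean point-to-point channel

        print(R1a + R2 + R1b, C((P1 + P2) / N))      # the two values coincide (up to rounding)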

  • The effect of decision delay in finite-length decision feedback equalization

    Publication Year: 1996 , Page(s): 618 - 621
    Cited by:  Papers (49)  |  Patents (3)
    PDF (352 KB)

    In this correspondence we derive the finite-length, minimum mean-squared error decision feedback equalizer (MMSE-DFE). We include decision delay as an explicit parameter. Our derivation yields an algebraic interpretation of the effect of decision delay on DFE performance (measured by mean-squared error). It also allows the fast computation of the MMSE-DFE for several different values of both decision delay and the number of feedback taps. Our approach is especially useful for short filter lengths, when the decision delay can significantly affect DFE performance.

  • Iterative decoding of binary block and convolutional codes

    Publication Year: 1996 , Page(s): 429 - 445
    Cited by:  Papers (957)  |  Patents (167)
    PDF (1420 KB)

    Iterative decoding of two-dimensional systematic convolutional codes has been termed “turbo” (de)coding. Using log-likelihood algebra, we show that any decoder can be used which accepts soft inputs, including a priori values, and delivers soft outputs that can be split into three terms: the soft channel and a priori inputs, and the extrinsic value. The extrinsic value is used as an a priori value for the next iteration. Decoding algorithms in the log-likelihood domain are given not only for convolutional codes but also for any linear binary systematic block code. The iteration is controlled by a stop criterion derived from cross-entropy, which results in a minimal number of iterations. Optimal and suboptimal decoders with reduced complexity are presented. Simulation results show that very simple component codes are sufficient: block codes are appropriate for high rates and convolutional codes for rates below 2/3. Any combination of block and convolutional component codes is possible. Several interleaving techniques are described. At a bit error rate (BER) of 10^-4, the performance is slightly above or around the bounds given by the cutoff rate for reasonably simple block/convolutional component codes, interleaver sizes below 1000, and three to six iterations.
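
    The split of the soft output into channel, a priori, and extrinsic terms can be sketched with the simplest soft-in/soft-out component one can write down, a single parity-check code (a generic textbook rule shown only to illustrate the bookkeeping, not the specific decoders of the paper):

        import math

        def spc_extrinsic(L):
            """Extrinsic LLRs under one parity-check constraint: bit i learns only from the other bits."""
            ext = []
            for i in range(len(L)):
                prod = 1.0
                for k, l in enumerate(L):
                    if k != i:
                        prod *= math.tanh(l / 2.0)
                ext.append(2.0 * math.atanh(prod))
            return ext

        L_channel = [1.2, -0.4, 2.0, 0.7]            # hypothetical channel LLRs for a length-4 parity check
        L_apriori = [0.0, 0.0, 0.0, 0.0]             # no prior knowledge in the first iteration
        L_ext = spc_extrinsic([c + a for c, a in zip(L_channel, L_apriori)])

        # Soft output = channel + a priori + extrinsic; only the extrinsic part is handed to the
        # other component decoder as its a priori input for the next iteration.
        L_out = [c + a + e for c, a, e in zip(L_channel, L_apriori, L_ext)]
        print(L_ext)
        print(L_out)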

  • Generation of matrices for determining minimum distance and decoding of cyclic codes

    Publication Year: 1996 , Page(s): 653 - 657
    Cited by:  Papers (2)
    PDF (408 KB)

    A simple method based on Newton's identities and their extensions is presented for determining the actual minimum distance of cyclic codes. More significantly, it is shown that this method also provides a mechanism for generating the type of syndrome matrices needed by Feng and Tzeng's (see ibid., vol. 40, pp. 1364-1374, Sept. 1994) new procedure for decoding cyclic and BCH codes up to their actual minimum distance. Two procedures for generating such matrices are given. With these procedures, we have generated syndrome matrices having only one class of conjugate syndromes on the minor diagonal for all binary cyclic codes of length n<63 and many codes of length 63⩽n⩽99. A listing of such syndrome matrices for selected codes of length n<63 is included. An interesting connection between the method presented and the shifting technique of van Lint and Wilson (1986) is also noted.

  • An algebraic procedure for decoding beyond eBCH

    Publication Year: 1996 , Page(s): 649 - 652
    Cited by:  Papers (2)
    PDF (432 KB)

    We present a new way to find all possible error patterns up to a given weight greater than the designed error-correcting capability (e_BCH). Possible error positions are localized by indicators. Our method can be seen as an extension of the step-by-step decoder introduced by Massey (1965) for BCH codes. We consider the decoding of binary cyclic codes.


Aims & Scope

IEEE Transactions on Information Theory publishes papers concerned with the transmission, processing, and utilization of information.

