IEEE Transactions on Information Theory

Issue 5 • September 1997

Displaying Results 1 - 25 of 46
  • The zero-guards algorithm for general minimum-distance decoding problems

    Page(s): 1655 - 1658

    We present some properties of the zero-guards algorithm, an improved version of the zero-neighbors algorithm. These properties can be used to find zero-guards. A new decoding procedure using zero-guards is also given.

  • Covering numbers for real-valued function classes

    Page(s): 1721 - 1724

    We find tight upper and lower bounds on the growth rate for the covering numbers of functions of bounded variation in the L1 metric in terms of all the relevant constants. We also find upper and lower bounds on covering numbers for general function classes over the family of L1(dP) metrics in terms of a scale-sensitive combinatorial dimension of the function class.

  • Some decomposable codes: the |a+x|b+x|a+b+x| construction

    Page(s): 1663 - 1667

    Codes with decomposable structure allow the use of multistage decoding procedures to achieve suboptimum bounded-distance error performance with reduced decoding complexity. This correspondence presents some new decomposable codes, including a class of distance-8 codes, that are constructed based on the |a+x|b+x|a+b+x| construction method. Some existing best codes are shown to be decomposable and hence can be decoded with multistage decoding.

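    The construction named above is simple to state: in its basic form, codewords a and b range over one binary code and x over another code of the same length, and the blocks a+x, b+x, a+b+x are concatenated over GF(2). The following sketch is a toy illustration of that construction with arbitrary small component codes, not the specific code families of the correspondence.

      import itertools

      def gf2_add(u, v):
          # Componentwise addition over GF(2).
          return tuple(a ^ b for a, b in zip(u, v))

      def abx_construction(code_A, code_B):
          # |a+x|b+x|a+b+x|: a and b range over code_A, x over code_B.
          # All component codewords have length n; the result has length 3n.
          new_code = set()
          for a in code_A:
              for b in code_A:
                  for x in code_B:
                      word = gf2_add(a, x) + gf2_add(b, x) + gf2_add(gf2_add(a, b), x)
                      new_code.add(word)
          return new_code

      def min_distance(code):
          # Brute-force minimum Hamming distance (toy sizes only).
          return min(sum(a != b for a, b in zip(u, v))
                     for u, v in itertools.combinations(code, 2))

      # Toy component codes of length 4: the even-weight code and the repetition code.
      even_weight = {w for w in itertools.product((0, 1), repeat=4) if sum(w) % 2 == 0}
      repetition = {(0, 0, 0, 0), (1, 1, 1, 1)}

      C = abx_construction(even_weight, repetition)
      print(len(C), min_distance(C))

    In this toy case the map is one-to-one, since the sum of the three blocks recovers x, so the example yields 128 codewords of length 12.
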
  • New constant weight codes from linear permutation groups

    Page(s): 1623 - 1630

    New constant weight codes are found by considering certain linear permutation groups. A code is obtained as a collection of orbits of words under such a group. This leads to a difficult optimization problem, where a stochastic search heuristic, tabu search, is used to find good solutions in a feasible amount of time. Nearly 40 new codes of length at most 28 are presented.

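    The orbit idea in this abstract is easy to make concrete: a permutation group acts on the coordinate positions, and the code is assembled from whole orbits of constant-weight words. The sketch below computes the orbit of one word under the group generated by a given set of permutations; the cyclic group used here is a toy stand-in, not one of the linear permutation groups of the paper, and the tabu-search selection of orbits is not shown.

      from collections import deque

      def apply_perm(perm, word):
          # Position i of the image receives the symbol from position perm[i].
          return tuple(word[perm[i]] for i in range(len(word)))

      def orbit(word, generators):
          # Breadth-first closure of {word} under the generating permutations.
          seen = {word}
          queue = deque([word])
          while queue:
              w = queue.popleft()
              for g in generators:
                  img = apply_perm(g, w)
                  if img not in seen:
                      seen.add(img)
                      queue.append(img)
          return seen

      # Toy example: the cyclic shift on 7 coordinates acting on a weight-3 word.
      n = 7
      cyclic_shift = tuple((i + 1) % n for i in range(n))
      start = (1, 1, 0, 1, 0, 0, 0)
      orb = orbit(start, [cyclic_shift])
      print(len(orb))          # 7 distinct cyclic shifts of this word
      print(sorted(orb)[:3])   # a few orbit members
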
  • A multiuser approach to narrowband cellular communications

    Page(s): 1503 - 1517

    We compare three receivers for coded narrowband transmission affected by fading and co-channel interference. The baseline receiver is based on conventional diversity reception with maximal-ratio combining. A multiuser approach allows us to derive a maximum-likelihood multiuser receiver and its reduced-complexity suboptimal version. Finally, a decorrelating diversity receiver, which seeks a tradeoff between performance and complexity, is studied. Closed-form performance parameters are derived for all the proposed receivers in the case of coded coherent PSK and independent frequency nonselective Rayleigh fading.

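    The baseline receiver mentioned here, diversity reception with maximal-ratio combining, weights each diversity branch by the conjugate of its channel gain before summing. A minimal sketch with uncoded BPSK over independent Rayleigh branches (all parameter values are illustrative; the paper treats coded coherent PSK with co-channel interference):

      import numpy as np

      rng = np.random.default_rng(0)

      L = 4                    # diversity branches
      num_bits = 10000
      snr_per_branch = 2.0     # linear SNR per branch (illustrative value)

      bits = rng.integers(0, 2, num_bits)
      s = 1 - 2 * bits         # BPSK symbols +/-1

      # Independent, frequency-nonselective Rayleigh fading per branch.
      h = (rng.normal(size=(L, num_bits)) + 1j * rng.normal(size=(L, num_bits))) / np.sqrt(2)
      noise = (rng.normal(size=(L, num_bits)) + 1j * rng.normal(size=(L, num_bits))) / np.sqrt(2 * snr_per_branch)
      y = h * s + noise

      # Maximal-ratio combining: weight each branch by the conjugate channel gain, then sum.
      z = np.sum(np.conj(h) * y, axis=0)
      decisions = (z.real < 0).astype(int)

      print("bit error rate:", np.mean(decisions != bits))
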
  • A practical method for approaching the channel capacity of constrained channels

    Page(s): 1389 - 1399

    A new coding technique is proposed that translates user information into a constrained sequence using very long codewords. The severe error propagation that normally results from the use of long codewords is avoided by reversing the conventional hierarchy of the error control code and the constrained code. The new technique is exemplified by focusing on (d, k)-constrained codes. A storage-effective enumerative encoding scheme is proposed for translating user data into long dk sequences and vice versa. For dk runlength-limited codes, estimates are given of the tradeoff between coding efficiency and encoder and decoder complexity. We show that for most common d, k values, a code rate less than 0.5% below channel capacity can be obtained by using hardware mainly consisting of a ROM lookup table of size 1 kbyte. For selected values of d and k, the size of the lookup table is much smaller. The paper concludes with an illustrative numerical example of a rate 256/466, (d=2, k=15) code, which provides a serviceable 10% increase in rate with respect to its traditional rate 1/2, (2, 7) counterpart.

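    For context, the channel capacity of a (d, k) constraint, against which the quoted code rates are measured, is the base-2 logarithm of the spectral radius of the adjacency matrix of the constraint graph. This is the standard Shannon computation, not the paper's enumerative coding scheme; a short sketch:

      import numpy as np

      def dk_capacity(d, k):
          # States 0..k track the current run of zeros since the last one.
          # From state i we may emit a zero (go to state i+1, if i < k) or,
          # once at least d zeros have accumulated, emit a one (return to state 0).
          n_states = k + 1
          A = np.zeros((n_states, n_states))
          for i in range(n_states):
              if i < k:
                  A[i, i + 1] = 1.0    # emit 0
              if i >= d:
                  A[i, 0] = 1.0        # emit 1
          lam = max(abs(np.linalg.eigvals(A)))
          return np.log2(lam)

      for d, k in [(2, 7), (2, 15), (1, 7)]:
          print(f"capacity of ({d},{k}) constraint: {dk_capacity(d, k):.4f}")

    Running this for (d, k) = (2, 15) gives the capacity figure that the quoted rate 256/466 is compared against.
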
  • Cryptographically resilient functions

    Page(s): 1740 - 1747

    This correspondence studies resilient functions, which have applications in fault-tolerant distributed computing, quantum cryptographic key distribution, and random sequence generation for stream ciphers. We present a number of new methods for synthesizing resilient functions. An interesting aspect of these methods is that they are applicable to both linear and nonlinear resilient functions. Our second major contribution is to show that every linear resilient function can be transformed into a large number of nonlinear resilient functions with the same parameters. As a result, we obtain resilient functions that are highly nonlinear and have a high algebraic degree.

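    A classical way to obtain a linear resilient function, the kind of starting point such transformations work from, is to take a generator matrix G of an [n, m, d] linear code and use f(x) = xG^T, which is (d-1)-resilient. The sketch below builds such a function from a [7,4,3] code and verifies resiliency by exhaustive check; it illustrates the definition only, not the synthesis methods of the correspondence.

      import itertools

      # Generator matrix of a [7,4,3] Hamming code (rows are basis codewords).
      G = [
          [1, 0, 0, 0, 0, 1, 1],
          [0, 1, 0, 0, 1, 0, 1],
          [0, 0, 1, 0, 1, 1, 0],
          [0, 0, 0, 1, 1, 1, 1],
      ]
      n, m = 7, 4

      def f(x):
          # Linear map x -> x G^T over GF(2).
          return tuple(sum(x[j] * G[i][j] for j in range(n)) % 2 for i in range(m))

      def is_t_resilient(t):
          # Check: for every choice of t fixed input positions and every fixing,
          # the output is uniform over GF(2)^m when the free inputs are uniform.
          for fixed_pos in itertools.combinations(range(n), t):
              free_pos = [i for i in range(n) if i not in fixed_pos]
              for fixed_vals in itertools.product((0, 1), repeat=t):
                  counts = {}
                  for free_vals in itertools.product((0, 1), repeat=len(free_pos)):
                      x = [0] * n
                      for p, v in zip(fixed_pos, fixed_vals):
                          x[p] = v
                      for p, v in zip(free_pos, free_vals):
                          x[p] = v
                      y = f(x)
                      counts[y] = counts.get(y, 0) + 1
                  if len(counts) != 2 ** m or len(set(counts.values())) != 1:
                      return False
          return True

      print(is_t_resilient(2))   # True: a [7,4,3] code yields a 2-resilient function
      print(is_t_resilient(3))   # False: the resiliency order is bounded by d - 1 = 2
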
  • Fixed-slope universal lossy data compression

    Page(s): 1465 - 1476

    Corresponding to any lossless codeword length function l, three universal lossy data compression schemes are presented: one with a fixed rate, one with a fixed distortion, and one with a fixed slope. The first two are generalizations of Yang-Kieffer's results (see ibid., vol.42, no.1, p.239-45, 1995) to the general case of any lossless codeword length function l, whereas the third is new. In the fixed-slope case, λ > 0, our universal lossy data compression scheme works as follows: for any source sequence x^n of length n, the encoder first searches for a reproduction sequence y^n of length n which minimizes the cost function n^{-1} l(y^n) + λ ρ_n(x^n, y^n) over all reproduction sequences of length n, and then encodes x^n into the binary codeword of length l(y^n) associated with y^n via the lossless codeword length function l, where ρ_n(x^n, y^n) is the distortion per sample between x^n and y^n. Under some mild assumptions on the lossless codeword length function l, it is shown that when this fixed-slope data compression scheme is applied to encode a stationary, ergodic source, the resulting encoding rate per sample and the distortion per sample converge with probability one to R_λ and D_λ, respectively, where (D_λ, R_λ) is the point on the rate distortion curve at which the slope of the rate distortion function is -λ. This result holds in particular for the arithmetic codeword length function and the Lempel-Ziv codeword length function. The main advantage of the fixed-slope scheme over the fixed-rate and fixed-distortion schemes is that it converts the encoding problem into a search through a trellis, which can then be carried out with sequential search algorithms. Simulation results show that this fixed-slope universal lossy data compression scheme, combined with a suitable search algorithm, is promising.

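    The fixed-slope selection rule can be stated very concretely: the encoder picks the reproduction y^n minimizing n^{-1} l(y^n) + λ ρ_n(x^n, y^n). The toy sketch below applies that rule by exhaustive search over a tiny candidate set, with zlib's compressed length standing in for a lossless codeword length function and Hamming distortion per sample; the actual scheme searches a trellis with sequential algorithms, and every specific choice here (candidate set, λ values, length function) is illustrative only.

      import itertools
      import zlib

      def codeword_length_bits(y):
          # Stand-in lossless codeword length function l(y): bits used by zlib.
          return 8 * len(zlib.compress(bytes(y)))

      def distortion_per_sample(x, y):
          # Hamming distortion per sample.
          return sum(a != b for a, b in zip(x, y)) / len(x)

      def fixed_slope_encode(x, candidates, lam):
          # Choose the reproduction minimizing n^{-1} l(y) + lambda * rho_n(x, y).
          def cost(y):
              return codeword_length_bits(y) / len(x) + lam * distortion_per_sample(x, y)
          return min(candidates, key=cost)

      # Toy binary source sequence and a candidate set of short blocks repeated to
      # full length (purely illustrative; not the trellis search of the paper).
      x = (0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0) * 4
      n = len(x)
      candidates = [tuple(itertools.islice(itertools.cycle(block), n))
                    for block in itertools.product((0, 1), repeat=3)]

      for lam in (0.5, 2.0, 8.0):
          y = fixed_slope_encode(x, candidates, lam)
          print(lam, distortion_per_sample(x, y), codeword_length_bits(y) / n)
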
  • New single asymmetric error-correcting codes

    Page(s): 1619 - 1623

    New single asymmetric error-correcting codes are proposed. These codes are better than existing codes when the code length n is greater than 10, except for n=12 and n=15. In many cases one can construct a code C containing at least 2^n/n codewords. It is known that a code with |C| ≥ 2^n/(n+1) can be easily obtained. It should be noted that the proposed codes for n=12 and n=15 are also the best known codes that can be explicitly constructed, since the best of the existing codes for these values of n are based on combinatorial arguments. Useful partitions of binary vectors are also presented.

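    The "easily obtained" benchmark |C| ≥ 2^n/(n+1) cited in the abstract is classically met by Varshamov-Tenengolts codes, which partition {0,1}^n into n+1 classes by a weighted checksum and correct a single asymmetric error; by pigeonhole the largest class has at least 2^n/(n+1) words. A small sketch of that baseline (not of the paper's new constructions):

      import itertools

      def vt_code(n, a):
          # Varshamov-Tenengolts code: words x with sum_i i*x_i = a (mod n+1).
          # These classical codes correct a single asymmetric error.
          return [x for x in itertools.product((0, 1), repeat=n)
                  if sum((i + 1) * xi for i, xi in enumerate(x)) % (n + 1) == a]

      n = 10
      sizes = [len(vt_code(n, a)) for a in range(n + 1)]
      print("largest class:", max(sizes), "   bound 2^n/(n+1):", 2 ** n / (n + 1))
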
  • On Diamond codes

    Page(s): 1400 - 1411

    A Diamond code is an error-correcting code obtained from two component codes. As in a product code, any symbol in a word of a Diamond code is checked by both component codes. However, the “code directions” for the component codes have been selected to minimize the memory that is required between successive decoding stages for the component codes. Diamond codes combine the error correcting power of a product code with the reduced memory requirements of the cross interleaved Reed-Solomon code (CIRC), applied in the compact disk system. We discuss encoding, decoding, and minimum distance properties of Diamond codes. Variations on the Diamond code construction are proposed that result in codes that are suited for use in rewritable block-oriented applications.

  • On an approximate eigenvector associated with a modulation code

    Page(s): 1672 - 1678

    Let S be a constrained system of finite type, described in terms of a labeled graph M of finite type. Furthermore, let C be an irreducible constrained system of finite type, consisting of the collection of possible code sequences of some finite-state-encodable, sliding-block-decodable modulation code for S. It is known that this code can then be obtained by state splitting, using a suitable approximate eigenvector. In this correspondence, we show that the collection of all approximate eigenvectors that could be used in such a construction of C contains a unique minimal element. Moreover, we show how to construct its linear span from knowledge of M and C only, thus providing a lower bound on the components of such vectors. For illustration we discuss an example showing that arbitrarily large approximate eigenvectors are sometimes required to obtain the best code (in terms of decoding-window size), even though a small vector is also available.

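    In this setting an (A, n)-approximate eigenvector is a nonnegative integer vector v, not identically zero, with A v ≥ n v componentwise, where A is the adjacency matrix of the labeled graph; such vectors drive the state-splitting construction. A minimal check on a made-up toy graph (not the example discussed in the correspondence):

      import numpy as np

      def is_approximate_eigenvector(A, v, n):
          # v is an (A, n)-approximate eigenvector if A v >= n v componentwise,
          # with v a nonnegative integer vector that is not all zero.
          v = np.asarray(v)
          return bool(np.any(v > 0) and np.all(v >= 0) and np.all(A @ v >= n * v))

      # Toy adjacency matrix of a small labeled graph (illustrative only).
      A = np.array([
          [1, 1, 0],
          [0, 1, 1],
          [1, 0, 1],
      ])

      print(is_approximate_eigenvector(A, [1, 1, 1], 2))  # True: here A v equals 2 v
      print(is_approximate_eigenvector(A, [2, 1, 1], 2))  # False: the inequality fails in one component
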
  • New good quasi-cyclic ternary and quaternary linear codes

    Page(s): 1647 - 1650

    Let an [n,k,d;q]-code be a linear code of length n, dimension k, and minimum Hamming distance d over GF(q). The following quasi-cyclic codes are constructed in this paper: [44,11,20;3], [55,11,26;3], [66,11,32;3], [48,12,21;3], [60,12,28;3], [56,13,24;3], [65,13,29;3], [56,14,23;3], [60,15,23;3], [64,16,25;3], [36,9,19;4], [90,9,55;4], [99,9,61;4], [30,10,14;4], [50,10,27;4], [55,10,30;4], [33,11,15;4], [44,11,22;4], [55,11,29;4], [36,12,16;4], [48,12,23;4], [60,12,31;4]. All of these codes attain or exceed the respective lower bounds on the minimum distance given by Brouwer.

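    One common way to present a quasi-cyclic code of index l is a generator matrix assembled from l circulant blocks, each defined by one word. The sketch below builds such a matrix over GF(3) and brute-forces the minimum distance of a tiny example; the defining words are arbitrary and the example has nothing to do with the record codes listed above.

      import itertools
      import numpy as np

      def circulant(first_row):
          # m x m circulant whose rows are successive cyclic shifts of first_row.
          m = len(first_row)
          return np.array([np.roll(first_row, i) for i in range(m)])

      def quasi_cyclic_generator(defining_words, q):
          # 1-generator quasi-cyclic code of index l: G = [C_1 | C_2 | ... | C_l],
          # each C_j a circulant built from one defining word of the same length m.
          return np.hstack([circulant(w) for w in defining_words]) % q

      def min_distance(G, q):
          # Brute force over all nonzero information vectors (toy sizes only).
          k = G.shape[0]
          best = None
          for msg in itertools.product(range(q), repeat=k):
              if any(msg):
                  w = int(np.count_nonzero(np.array(msg) @ G % q))
                  best = w if best is None else min(best, w)
          return best

      # Toy example over GF(3): index l = 2, circulant size m = 4.
      G = quasi_cyclic_generator([[1, 2, 0, 0], [1, 0, 1, 0]], q=3)
      print(G.shape)            # (4, 8): generator of a length-8 ternary quasi-cyclic code
      print(min_distance(G, q=3))
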
  • Collision-type multiple-user communications

    Page(s): 1725 - 1736

    A collision-type multiple-access system is investigated in which every user transmits symbols from a common N-ary frequency-shift-keyed signal alphabet. We present a series of information-theoretic properties of the associated mathematical channel model. In the absence of noise, we calculate a large-system approximation to the sum capacity, which is used to show that, in the limit, combining multiple accessing and coding results in no loss in capacity compared to a fixed-allocation scheme. The presence of thermal noise or undetected users influences the capacity. Bounds on the capacity of the channel in the presence of thermal noise are calculated, as well as the capacity of the system in the presence of interfering users. Finally, a new iterative multiuser detector, the consensus decoder, is described and simulation performance results are shown. It is demonstrated that this decoder can operate to within approximately 70% of the channel capacity.

  • Probabilistic crisscross error correction

    Page(s): 1425 - 1438

    The crisscross error model in data arrays is considered, where the corrupted symbols are confined to a prescribed number of rows or columns (or both). Under the additional assumption that the corrupted entries are uniformly distributed over the channel alphabet, and by allowing a small decoding error probability, a coding scheme is presented where the redundancy can get close to one half the redundancy required in minimum-distance decoding of crisscross errors.

  • A probability-ratio approach to approximate binary arithmetic coding

    Page(s): 1658 - 1662

    We describe an alternative mechanism for approximate binary arithmetic coding. The quantity that is approximated is the ratio between the probabilities of the two symbols. Analysis is given to show that the inefficiency so introduced is less than 0.7% on average, and in practice the compression loss is negligible.

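    For orientation, exact binary arithmetic coding narrows an interval in proportion to the two symbol probabilities, and the final interval width determines the code length; the correspondence replaces the exact split by one driven by an approximated ratio of the two probabilities. The sketch below shows only the exact, unapproximated interval update (floating point, no renormalization), as a reference point rather than as the paper's mechanism.

      import math

      def arithmetic_encode_interval(bits, p0):
          # Exact binary arithmetic coding interval update (float precision, no
          # renormalization): suitable only for short illustrative inputs.
          low, high = 0.0, 1.0
          for b in bits:
              split = low + p0 * (high - low)
              if b == 0:
                  high = split     # symbol 0 takes the lower part of the interval
              else:
                  low = split      # symbol 1 takes the upper part
          return low, high

      bits = [0, 1, 1, 0, 0, 0, 1, 0]
      p0 = 0.7
      low, high = arithmetic_encode_interval(bits, p0)
      ideal = -sum(math.log2(p0 if b == 0 else 1 - p0) for b in bits)
      print("final interval width :", high - low)
      print("-log2(width)         :", -math.log2(high - low))
      print("sum of -log2 p(symbol):", ideal)   # matches: the coder spends about the self-information
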
  • Nonuniform sampling of bandlimited signals with polynomial growth on the real axis

    Page(s): 1717 - 1721

    We derive a sampling expansion for bandlimited signals with polynomial growth on the real axis. The sampling expansion uses nonuniformly spaced sampling points. Unlike other known sampling expansions for such signals, however, ours converges uniformly to the signal on any compact set. An estimate of the truncation error of such a series is also obtained.

  • Is code equivalence easy to decide?

    Page(s): 1602 - 1604

    We study the computational difficulty of deciding whether two matrices generate equivalent linear codes, i.e., codes that consist of the same codewords up to a fixed permutation on the codeword coordinates. We call this problem code equivalence. Using techniques from the area of interactive proofs, we show, on the one hand, that under the assumption that the polynomial-time hierarchy does not collapse, code equivalence is not NP-complete. On the other hand, we present a polynomial-time reduction from the graph isomorphism problem to code equivalence. Thus if one could find an efficient (i.e., polynomial-time) algorithm for code equivalence, then one could settle the long-standing problem of determining whether there is an efficient algorithm for solving graph isomorphism.

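    The decision problem itself is easy to state and, for very small codes, to decide by brute force: two codes are equivalent if some permutation of the coordinates maps the codeword set of one onto that of the other. The exponential sketch below only pins down that definition; the correspondence is about the problem's complexity, not about such a search.

      import itertools

      def codewords(G, n):
          # All codewords of the binary code generated by the rows of G (length n).
          words = set()
          for coeffs in itertools.product((0, 1), repeat=len(G)):
              w = tuple(sum(c * row[j] for c, row in zip(coeffs, G)) % 2 for j in range(n))
              words.add(w)
          return words

      def equivalent(G1, G2, n):
          # Brute force over all n! coordinate permutations (tiny n only).
          C1, C2 = codewords(G1, n), codewords(G2, n)
          if len(C1) != len(C2):
              return False
          for perm in itertools.permutations(range(n)):
              if {tuple(w[p] for p in perm) for w in C1} == C2:
                  return True
          return False

      n = 4
      G1 = [[1, 0, 1, 0], [0, 1, 0, 1]]
      G2 = [[1, 1, 0, 0], [0, 0, 1, 1]]
      print(equivalent(G1, G2, n))   # True: the second code is a coordinate permutation of the first
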
  • Complementary reliability-based decodings of binary linear block codes

    Page(s): 1667 - 1672

    This correspondence presents a hybrid reliability-based decoding algorithm which combines the reprocessing method based on the most reliable basis and a generalized Chase-type algebraic decoder based on the least reliable positions. It is shown that reprocessing with a simple additional algebraic decoding effort achieves significant coding gain. For long codes, the order of reprocessing required to achieve asymptotically optimum error performance is reduced by approximately 1/3. This significantly reduces the computational complexity, especially for long codes. Also, a more efficient criterion for stopping the decoding process is derived based on the knowledge of the algebraic decoding solution.

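    The Chase-type half of the hybrid works on the least reliable positions: the hard-decision word is perturbed by every error pattern confined to those positions and each candidate is passed to an algebraic decoder. The sketch below only generates that candidate list from soft values; the algebraic decoder, the most-reliable-basis reprocessing, and the stopping criterion are left out, and all numbers are made up.

      import itertools

      def chase_candidates(soft_values, num_least_reliable):
          # soft_values: real channel outputs; sign gives the hard decision and
          # magnitude gives the reliability of each position.
          hard = [0 if v >= 0 else 1 for v in soft_values]
          # Indices of the least reliable positions (smallest magnitudes).
          order = sorted(range(len(soft_values)), key=lambda i: abs(soft_values[i]))
          least_reliable = order[:num_least_reliable]
          # Flip every subset of the least reliable positions; each candidate would
          # then be handed to an algebraic decoder and the surviving codewords
          # compared under a soft-decision metric.
          candidates = []
          subsets = itertools.chain.from_iterable(
              itertools.combinations(least_reliable, r)
              for r in range(num_least_reliable + 1))
          for flips in subsets:
              cand = list(hard)
              for i in flips:
                  cand[i] ^= 1
              candidates.append(tuple(cand))
          return candidates

      # Made-up soft values for a length-7 received word (illustrative only).
      soft = [0.9, -0.1, 0.4, -1.2, 0.05, 0.7, -0.3]
      for c in chase_candidates(soft, num_least_reliable=3):
          print(c)
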
  • New extremal self-dual codes of lengths 42 and 44

    Page(s): 1607 - 1612

    All extremal binary self-dual codes of lengths 42 and 44 which have an automorphism of order 5 with eight independent cycles are obtained up to equivalence. There are 109 inequivalent [42, 21, 8] codes with such an automorphism. The [44, 22, 8] codes that are obtained have 29 different weight enumerators.

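    For reference, binary self-duality is easy to test from a generator matrix: the code must have dimension n/2 and G G^T must vanish over GF(2), i.e. every pair of rows (including a row with itself) has even inner product. A small sketch using the extended [8,4,4] Hamming code rather than the length-42 and length-44 codes of the correspondence:

      import numpy as np

      def gf2_rank(M):
          # Gaussian elimination over GF(2).
          M = np.array(M, dtype=int) % 2
          rank = 0
          rows, cols = M.shape
          for col in range(cols):
              pivot = next((r for r in range(rank, rows) if M[r, col]), None)
              if pivot is None:
                  continue
              M[[rank, pivot]] = M[[pivot, rank]]
              for r in range(rows):
                  if r != rank and M[r, col]:
                      M[r] = (M[r] + M[rank]) % 2
              rank += 1
          return rank

      def is_self_dual(G):
          # Binary self-dual: dimension n/2 and G * G^T = 0 over GF(2).
          G = np.array(G, dtype=int) % 2
          k, n = G.shape
          orthogonal = not np.any((G @ G.T) % 2)
          return orthogonal and gf2_rank(G) == k and 2 * k == n

      # Generator matrix of the extended [8,4,4] Hamming code, a classical self-dual code.
      G8 = [
          [1, 0, 0, 0, 0, 1, 1, 1],
          [0, 1, 0, 0, 1, 0, 1, 1],
          [0, 0, 1, 0, 1, 1, 0, 1],
          [0, 0, 0, 1, 1, 1, 1, 0],
      ]
      print(is_self_dual(G8))   # True
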
  • Optimal multiplexing on a single link: delay and buffer requirements

    Page(s): 1518 - 1535

    This paper is motivated by the need to provide per-session quality-of-service guarantees in fast packet-switched networks. We address the problem of characterizing and designing scheduling policies that are optimal in the sense of minimizing buffer and/or delay requirements under commonly accepted traffic constraints. We investigate buffer requirements under three typical memory allocation mechanisms which represent tradeoffs between efficiency and complexity. For traffic with delay constraints we provide policies that are optimal in the sense of satisfying the constraints whenever they are satisfiable by any policy. We also investigate the tradeoff between delay and buffer optimality, and design policies that are “good” (optimal or close to optimal) for both. Finally, we extend our results to the case of “soft” delay constraints and address the issue of designing policies that satisfy such constraints in a fair manner. Given our focus on packet switching, we mainly concern ourselves with nonpreemptive policies; one class of nonpreemptive policies which we consider, based on tracking preemptive policies, is introduced here and may be of interest in other applications as well.

  • Addendum to “Non-BCH triple-error-correcting codes”


    For the original paper see van der Vlugt (IEEE Trans. Inform. Theory, vol.42, p.1612-14, 1996). One of the subjects of the above paper is the determination of the weight distribution of the dual code C of the binary cyclic codes of length n = 2^m - 1 with zeros α, α^(2^(t-1)+1), α^(2^t+1). Here m = 2t+1 and α generates the multiplicative group of the finite field F_(2^m). After the publication of van der Vlugt (1996), Tor Helleseth drew the present author's attention to the fact that Kasami's paper (1969) contains the key to an essentially different way of deriving the weight enumerator of C. Indeed, combining Theorem 15(ii)-1 in Kasami with the fact that the minimum distance d_min = 7 (see MacWilliams and Sloane, 1983) also yields the weight distribution of C.

  • A lower bound on the undetected error probability and strictly optimal codes

    Page(s): 1489 - 1502

    Error detection is a simple technique used in various communication and memory systems to enhance reliability. We study the probability that a q-ary (linear or nonlinear) block code of length n and size M fails to detect an error. A lower bound on this undetected error probability is derived in terms of q, n, and M. The new bound improves upon other bounds mentioned in the literature, even those that hold only for linear codes. Block codes whose undetected error probability equals the new lower bound are investigated. We call these codes strictly optimal codes and give a combinatorial characterization of them. We also present necessary and sufficient conditions for their existence. In particular, we find all values of n and M for which strictly optimal binary codes exist, and determine the structure of all of them. For example, we construct strictly optimal binary-coded decimal codes of length four and five, and we show that these are the only possible lengths of such codes.

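    For the special case of a binary linear code used purely for error detection on a binary symmetric channel with crossover probability p, the undetected error probability has the standard closed form P_ue(p) = sum over w >= 1 of A_w p^w (1-p)^(n-w), where A_w counts codewords of weight w; the paper's lower bound applies to general q-ary, possibly nonlinear codes. A small sketch with the [7,4,3] Hamming code as an example:

      import itertools
      from collections import Counter

      def weight_distribution(G, n):
          # Enumerate all codewords of the binary code generated by G (toy sizes only).
          weights = Counter()
          for coeffs in itertools.product((0, 1), repeat=len(G)):
              word = [sum(c * row[j] for c, row in zip(coeffs, G)) % 2 for j in range(n)]
              weights[sum(word)] += 1
          return weights

      def undetected_error_probability(weights, n, p):
          # P_ue(p) = sum over nonzero weights w of A_w * p^w * (1-p)^(n-w):
          # the probability that the BSC error pattern is itself a nonzero codeword.
          return sum(a * p**w * (1 - p)**(n - w) for w, a in weights.items() if w > 0)

      # [7,4,3] Hamming code.
      G = [
          [1, 0, 0, 0, 0, 1, 1],
          [0, 1, 0, 0, 1, 0, 1],
          [0, 0, 1, 0, 1, 1, 0],
          [0, 0, 0, 1, 1, 1, 1],
      ]
      n = 7
      A = weight_distribution(G, n)
      print(dict(A))    # weight counts: A_0 = 1, A_3 = 7, A_4 = 7, A_7 = 1
      print(undetected_error_probability(A, n, 0.01))
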
  • Tradeoff between source and channel coding

    Page(s): 1412 - 1424

    A fundamental problem in the transmission of analog information across a noisy discrete channel is the choice of channel code rate that optimally allocates the available transmission rate between lossy source coding and block channel coding. We establish tight bounds on the channel code rate that minimizes the average distortion of a vector quantizer cascaded with a channel coder and a binary-symmetric channel. Analytic expressions are derived in two cases of interest: small bit-error probability and arbitrary source vector dimension; arbitrary bit-error probability and large source vector dimension. We demonstrate that the optimal channel code rate is often substantially smaller than the channel capacity, and obtain a noisy-channel version of the Zador (1982) high-resolution distortion formula.

  • The Nordstrom-Robinson code is algebraic-geometric

    Page(s): 1588 - 1593

    The techniques of algebraic geometry have been widely and successfully applied to the study of linear codes over finite fields since the early 1980s. There has also been an increased interest in the study of linear codes over finite rings. In a previous paper, we combined these two approaches to coding theory by introducing and studying algebraic-geometric codes over rings. We show that the Nordstrom-Robinson code is the image under the Gray mapping of an algebraic-geometric code over Z/4Z.

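    The Gray mapping referred to here is the standard componentwise map from Z/4Z to binary pairs, 0 -> 00, 1 -> 01, 2 -> 11, 3 -> 10, which turns Lee distance on quaternary words into Hamming distance on binary words of twice the length. A minimal sketch of that map (not of the algebraic-geometric construction):

      # Standard Gray map from Z/4Z to binary pairs; applied componentwise it sends
      # a length-n quaternary word to a length-2n binary word, preserving
      # Lee distance as Hamming distance.
      GRAY = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}

      def gray_map(z4_word):
          return tuple(bit for symbol in z4_word for bit in GRAY[symbol])

      def lee_weight(z4_word):
          return sum(min(s, 4 - s) for s in z4_word)

      def hamming_weight(bits):
          return sum(bits)

      w = (2, 1, 0, 3, 3, 1, 2, 0)   # an arbitrary Z/4Z word, just for illustration
      print(gray_map(w))
      print(lee_weight(w), hamming_weight(gray_map(w)))   # equal: both are 8 for this word
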
  • Performance of explicit error detection and threshold decision in decoding with erasures

    Page(s): 1650 - 1655

    We study the performance of two schemes for decoding with erasures: threshold decision and decision by explicit error detection. We show that the latter scheme, based on error-detection coding and maximum-likelihood decoding, is at least as good as the former, at least for binary symmetric channels.


Aims & Scope

IEEE Transactions on Information Theory publishes papers concerned with the transmission, processing, and utilization of information.
