
Information Theory, IEEE Transactions on

Issue 6 • Date November 1987

  • In Memoriam: Yasuo Sugiyama

    Page(s): 757 - 758
    Freely Available from IEEE
  • A lower bound on the redundancy of D-ary Huffman codes (Corresp.)

    Page(s): 910 - 911

    A necessary and sufficient condition for the most likely letter of any discrete source to be coded by a single symbol with a D-ary Huffman code, 2 ≤ D < ∞, is derived. As a consequence, a lower bound on the redundancy of a D-ary Huffman code is established.
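For readers who want to experiment with the quantities involved, here is a generic sketch (not the paper's construction, which is not reproduced here): it computes D-ary Huffman code lengths by repeated D-way merging, padding with zero-probability dummy symbols so that every merge is exactly D-way, and reports the redundancy (average code length minus base-D entropy).

```python
import heapq
import itertools
import math

def huffman_lengths(probs, D=2):
    """Code lengths of a D-ary Huffman code for the given probabilities."""
    n = len(probs)
    pad = 0
    while (n + pad - 1) % (D - 1) != 0:  # dummies so every merge is D-way
        pad += 1
    tick = itertools.count()  # tiebreaker so the heap never compares nodes
    heap = [(p, next(tick), i) for i, p in enumerate(probs)]
    heap += [(0.0, next(tick), None) for _ in range(pad)]
    heapq.heapify(heap)
    while len(heap) > 1:
        group = [heapq.heappop(heap) for _ in range(D)]
        merged = [g[2] for g in group]  # internal node = list of children
        heapq.heappush(heap, (sum(g[0] for g in group), next(tick), merged))
    lengths = [0] * n

    def walk(node, depth):
        if node is None:            # zero-probability dummy leaf
            return
        if isinstance(node, int):   # real leaf: record its depth
            lengths[node] = depth
        else:
            for child in node:
                walk(child, depth + 1)

    walk(heap[0][2], 0)
    return lengths

def redundancy(probs, D=2):
    """Average code length minus the base-D entropy of the source."""
    lengths = huffman_lengths(probs, D)
    avg = sum(p * l for p, l in zip(probs, lengths))
    entropy = -sum(p * math.log(p, D) for p in probs if p > 0)
    return avg - entropy
```

For the dyadic distribution (1/2, 1/4, 1/4) the binary redundancy is zero; for a ternary (D = 3) code on (0.5, 0.2, 0.2, 0.1) the most likely letter receives a single-symbol codeword and the redundancy is positive.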
  • Encoding of correlated observations

    Page(s): 773 - 787

    An important class of engineering problems involves sensing an environment and making estimates based on the phenomena sensed. In the traditional model of this problem, the sensors' observations are available to the estimator without alteration. There is growing interest in distributed sensing systems in which several observations are communicated to the estimator over channels of limited capacity. The observations must be separately encoded so that the target can be estimated with minimum distortion. Two questions are addressed for a special case of this problem wherein there are two sensors which observe noisy data and communicate with a single estimator: 1) if the encoder is unlimited in complexity, what communication rates and distortions can be achieved? 2) if the encoder must be a quantizer (a mapping of a single observation sample into a digital output), how can it be designed for good performance? The first question is treated by the techniques of information theory. It is proved that a given combination of rates and distortion is achievable if there exist degraded versions of the observations that satisfy certain formulas. The second question is treated by two approaches. In the first, the outputs of the quantizers undergo a second stage of encoding which exploits their correlation to reduce the output rate. Algorithms which design the second stage are presented and tested. The second approach is based on the distributional distance, a measure of dissimilarity between two probability distributions. An algorithm to modify a quantizer for increased distributional distance is derived and tested.
  • Concatenated group codes and their exponents

    Page(s): 849 - 854

    Codes that are concatenations of group codes are considered. It is shown that when G and H are finite groups and the inner and outer codes are G- and H-codes, respectively, then under certain conditions the concatenated code is a G × H code. A necessary and sufficient condition is given for a G × H code to have a structure as a concatenated code. Further, under the assumption that all group algebras involved are semisimple, it is shown how the character of a concatenated code can be expressed in terms of the characters of the inner and outer codes. This leads to an application of a result by Ward [5] which enables one to find (or lower-bound) the exponent of the concatenated code by a computation on characters of G and H. In an example this result enables the improvement of the usual minimum distance bound on concatenated codes. A general upper bound on the exponent of concatenated group codes is proved, and it is shown to be tight by an example.
  • Capacity of the mismatched Gaussian channel

    Page(s): 802 - 812

    Information capacity is determined for the additive Gaussian channel when the constraint is given in terms of a covariance different from that of the channel noise. These results, combined with previous results on capacity when the constraint covariance is the same as the noise covariance, provide a complete and general solution for the information capacity of the Gaussian channel without feedback. The results are valid for both continuous-time and discrete-time channels and require only two assumptions: the noise is due to a stochastic process with sample paths having finite energy over the observation period (w.p.1), and the constraint is given in terms of a Hilbert space norm. Such a constraint is implicit in any constraint giving finite capacity.
  • Some probabilistic properties of the line-of-sight angle error to a remote object (Corresp.)

    Page(s): 938 - 942

    Closed-form expressions are derived for the distribution and the expectation of the error of the line-of-sight angle to a remote object when the rectangular position coordinates are independent identically distributed normal variates. Generalized results also are provided both for the cumulative distribution function for a hyperspace of N dimensions and for a spherically symmetric probability density function in three-dimensional space.
  • On the distribution of positive-definite Gaussian quadratic forms

    Page(s): 895 - 906

    Quadratic signal processing is used in detection and estimation of random signals. To describe the performance of quadratic signal processing, the probability distribution of the output of the processor is needed. Only positive-definite Gaussian quadratic forms are considered. The quadratic form is diagonalized in terms of independent Gaussian variables and its mean, moment-generating function, and cumulants are computed; conditions are given for the quadratic form to be χ² distributed and distributed like a sum of independent random variables having a Gamma distribution. A new method is proposed to approximate its probability distribution using an expansion in Laguerre polynomials for the central case and in generalized χ² distributions in the noncentral case. The series coefficients and bounds on truncation error are evaluated. Some applications in average power and power spectrum estimation and in detection illustrate our method.
  • Fast ML decoding algorithm for the Nordstrom-Robinson code (Corresp.)

    Page(s): 931 - 933

    The extended Nordstrom-Robinson code is an optimum nonlinear double-error-correcting code of length 16 with considerable practical importance. This code is also useful as a rate-1/2 vector quantizer for random waveforms such as speech linear-predictive-coding (LPC) residual. A fast decoding algorithm is described for maximum likelihood decoding (or nearest neighbor search in the squared-error sense) with 304 additions and 128 comparisons.
  • Minimax universal noiseless coding for unifilar and Markov sources (Corresp.)

    Page(s): 925 - 930

    Constructive upper bounds are presented for minimax universal noiseless coding of unifilar sources without any ergodicity assumptions. These bounds are obtained by quantizing the estimated probability distribution of source letters with respect to the relative entropy. They apply both to fixed-length to variable-length (FV) and variable-length to fixed-length (VF) codes. Unifilar sources are a generalization of the usual definition of Markov sources, so these results apply to Markov sources as well. These upper bounds agree asymptotically with the lower bounds given by Davisson for FV coding of stationary ergodic Markov sources.
  • New bounds on binary linear codes of dimension eight (Corresp.)

    Page(s): 917 - 919

    Let n(k,d) be the smallest integer n such that a binary linear code of length n, dimension k, and minimum distance at least d exists. New results are given that improve the best previously known bounds on n(8,d).
  • The design of joint source and channel trellis waveform coders

    Page(s): 855 - 865

    The generalized Lloyd algorithm is applied to the design of joint source and channel trellis waveform coders to encode discrete-time continuous-amplitude stationary and ergodic sources operating over discrete memoryless noisy channels. Experimental results are provided for independent and autoregressive Gaussian sources, binary symmetric channels, and absolute error and squared error distortion measures. Performance of the joint codes is compared with the tandem combination of a trellis source code and a trellis channel code on the independent Gaussian source using the squared error distortion measure operating over an additive white Gaussian noise channel. It is observed that the jointly optimized codes achieve performance close to or better than that of separately optimized tandem codes of the same constraint length. Performance improvement via a predictive joint source and channel trellis code is demonstrated for the autoregressive Gaussian source using the squared error distortion measure.
  • Codes on the Klein quartic, ideals, and decoding (Corresp.)

    Page(s): 923 - 925

    A sequence of codes with particular symmetries and with large rates compared to their minimal distances is constructed over the field GF(2³). In the sequence there is, for instance, a code of length 21 and dimension 10 with minimal distance 9, and a code of length 21 and dimension 16 with minimal distance 3. The codes are constructed from algebraic geometry using the dictionary between coding theory and algebraic curves over finite fields established by Goppa. The curve used in the present work is the Klein quartic. This curve has the maximal number of rational points over GF(2³) allowed by Serre's improvement of the Hasse-Weil bound, which, together with the low genus, accounts for the good code parameters. The Klein quartic has the Frobenius group G of order 21 acting as a group of automorphisms, which accounts for the particular symmetries of the codes. In fact, the codes are given alternative descriptions as left ideals in the group algebra GF(2³)[G]. This description allows for easy decoding. For instance, in the case of the single-error-correcting code of length 21 and dimension 16 with minimal distance 3, decoding is obtained by multiplication with an idempotent in the group algebra.
  • The source coding theorem via Sanov's theorem (Corresp.)

    Page(s): 907 - 909

    A proof of Shannon's source coding theorem is given using results from large deviation theory. In particular Sanov's theorem on convergence rates for empirical distributions is invoked to obtain the key large deviation result. This result is used directly to prove the source coding theorem for discrete memoryless sources. It is then shown how this theorem can be extended to ergodic Polish space valued sources and continuous distortion measures.
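The large deviation fact such a proof rests on is easy to see numerically in the simplest case. The sketch below (illustrative parameters chosen here, not taken from the paper) checks that for a fair coin the probability that the empirical heads frequency reaches q decays at roughly the exponential rate given by the Kullback-Leibler divergence D(q‖p), as Sanov's theorem predicts.

```python
from math import comb, log

def kl_bernoulli(q, p):
    """Kullback-Leibler divergence D(q || p) between Bernoulli laws."""
    return q * log(q / p) + (1 - q) * log((1 - q) / (1 - p))

p, n, k0 = 0.5, 200, 160
q = k0 / n  # empirical-frequency threshold, here 0.8
# Exact probability that 200 fair-coin flips show at least 160 heads.
tail = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k0, n + 1))
empirical_rate = -log(tail) / n
# Sanov / Chernoff: empirical_rate is at least, and close to, D(q || p).
```

The measured exponent always exceeds D(q‖p) (the Chernoff bound is one-sided) and approaches it as n grows.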
  • Conditional limit theorems under Markov conditioning

    Page(s): 788 - 801

    Let X_1, X_2, … be independent identically distributed random variables taking values in a finite set X, and consider the conditional joint distribution of the first m elements of the sample X_1, …, X_n on the condition that X_1 = x_1 and the sliding block sample average of a function h(·,·) defined on X² exceeds a threshold α > Eh(X_1, X_2). For m fixed and n → ∞, this conditional joint distribution is shown to converge to the m-step joint distribution of a Markov chain started in x_1 which is closest to X_1, X_2, … in Kullback-Leibler information divergence among all Markov chains whose two-dimensional stationary distribution P(·,·) satisfies Σ P(x,y)h(x,y) ≥ α, provided some distribution P on X² having equal marginals does satisfy this constraint with strict inequality. Similar conditional limit theorems are obtained when X_1, X_2, … is an arbitrary finite-order Markov chain and more general conditioning is allowed.
  • Locating the maximum of a simple random sequence by sequential search

    Page(s): 877 - 881

    Consider a stationary Gaussian process with E X_i X_j = a^{|i−j|}, where 0 < a < 1, and let 0 < r < 1. It is shown that to locate the maximum of X_1, X_2, …, X_N for large N with probability r, roughly −rN log a / log log N observations at sequentially determined locations are both sufficient and necessary.
  • Combinatorial structure and capacity of the permuting relay channel

    Page(s): 813 - 826

    Blackwell's trap-door channel is an interesting example of a finite state channel. Its deterministic version, that is, the permuting channel, has been studied by Ahlswede and Kaspi in a multiterminal information-theoretic framework. They determined the capacities of permuting jammer channels and relay channels for some special cases. The capacity problem for permuting relay channels is completely solved. More specifically, when α is the cardinality of the alphabet and β is the number of available storage locations in the channel, the capacity C_R(α, β) of the permuting relay channel is given by log λ, where λ denotes the maximum eigenvalue of a matrix Q derived from the state-transition mechanism associated with the channel.
  • A lower bound on the mean square error of reduced-order estimators for nonlinear processes (Corresp.)

    Page(s): 942 - 943

    A lower bound on the mean square error of any reduced-order estimator for a given nonlinear process, in continuous or discrete time, is derived. The bound can be calculated from any pair of lower and upper bounds on the optimal error covariance.
  • On combined symbol-and-bit error-control [4,2] codes over {0,1}^8 to be used in the

    Page(s): 911 - 917

    The construction, properties, and decoding of four nonequivalent [4,2] codes over the alphabet {0,1}^8 are described. These codes are able to correct the following error patterns: 1) error patterns containing one nonzero byte, 2) error patterns containing up to three nonzero bits, and 3) error patterns containing one byte erasure and at most one nonzero bit. In addition, all error patterns containing one byte erasure and two nonzero bits can be detected. These codes can be used in the
  • On testing for immutability of codes (Corresp.)

    Page(s): 934 - 938

    Immutable codes have the property that information recorded with them on write-once memories such as digital optical discs cannot be changed. Write-once memory permits changing a 0 into a 1, but once a 1 is written it cannot be changed back into a 0. Most commonly used codes do not have the property of immutability. After a very general definition of immutability is given, algorithms which test a given code for immutability are developed. Algorithms are presented for fixed-length codes as well as for variable-length codes.
  • Gray code weighting system (Corresp.)

    Page(s): 930 - 931

    Contrary to what has been believed, it is found that weights can be associated with different bit positions of the standard binary reflected Gray code. Therefore, two algorithms are devised for direct Gray-to-decimal and decimal-to-Gray conversions. These algorithms are much simpler than old algorithms which involve conversions to and from binary code and exclusive-OR operations.
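For reference, the conversions the correspondence aims to simplify go through binary code and exclusive-OR operations, as in this standard sketch (the paper's direct weighted algorithms are not reproduced here):

```python
def binary_to_gray(n: int) -> int:
    """Integer n to its standard binary reflected Gray code."""
    return n ^ (n >> 1)

def gray_to_binary(g: int) -> int:
    """Gray codeword back to the integer, via a prefix XOR of the bits."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n
```

Consecutive integers map to Gray codewords that differ in exactly one bit, which is the property the reflected code is built for.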
  • Multiple-burst error-correcting cyclic product codes (Corresp.)

    Page(s): 919 - 923

    Let C be the cyclic product code of p single parity check codes of relatively prime lengths n_1, n_2, …, n_p (n_1 < n_2 < ⋯ < n_p). It is proven that C can correct 2^{p−2} + 2^{p−3} − 1 bursts of length n_1, and ⌊(max{p+1, min{2^{p−s} + s − 1, 2^{p−s} + 2^{p−s−1}}} − 1)/2⌋ bursts of length n_1 n_2 ⋯ n_s (2 ≤ s ≤ p−2). For p = 3 this means that C is double-burst-n_1-correcting. An efficient decoding algorithm is presented for this code.
  • A list-type reduced-constraint generalization of the Viterbi algorithm

    Page(s): 866 - 876

    The Viterbi algorithm (VA), an optimum decoding rule for a Q-ary trellis code of constraint length K, operates by taking the best survivor from each of Q^{K−1} lists of candidates at each decoding step. A generalized VA (GVA) is proposed that makes comparisons on the basis of a label of length L (L ≤ K). It selects, incorporating the notion of list decoding, the S best survivors from each of Q^{L−1} lists of candidates at each decoding step. Coding theorems for a discrete memoryless channel are proved for GVA decoding and shown to be natural generalizations of those for VA decoding. An example of intersymbol interference removal is given to illustrate the practical benefits that the GVA can provide.
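The core idea of keeping the S best survivors per list instead of one can be sketched on a toy trellis. This is an illustrative reconstruction, not the paper's algorithm: the states, step count, and branch metric below are made up, and for an additive metric keeping S survivors per state recovers the S globally best paths.

```python
def list_viterbi(n_states, steps, branch_metric, S=1, start=0):
    """Keep the S best (cost, path) survivors per state at each step.

    branch_metric(t, s, s2) is the additive cost of moving from state s
    to state s2 at step t (lower is better). Returns the S best complete
    paths overall as (cost, path) pairs.
    """
    survivors = {start: [(0, (start,))]}
    for t in range(steps):
        candidates = {}
        for s, paths in survivors.items():
            for s2 in range(n_states):
                c = branch_metric(t, s, s2)
                for cost, path in paths:
                    candidates.setdefault(s2, []).append(
                        (cost + c, path + (s2,)))
        # Prune: only the S cheapest survivors per state are kept.
        survivors = {s2: sorted(cands)[:S] for s2, cands in candidates.items()}
    finalists = sorted(c for paths in survivors.values() for c in paths)
    return finalists[:S]
```

With S = 1 this reduces to the ordinary Viterbi algorithm; larger S yields a ranked list of paths, as in list decoding.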
  • Hypothesis testing with multiterminal data compression

    Page(s): 759 - 772

    The multiterminal hypothesis testing H: XY against H̄: X̄Ȳ is considered, where X^n (X̄^n) and Y^n (Ȳ^n) are separately encoded at rates R_1 and R_2, respectively. The problem is to determine the minimum β_n of the second kind of error probability, under the condition that the first kind of error probability α_n ≤ ε for a prescribed 0 < ε < 1. A good lower bound θ_L(R_1, R_2) on the power exponent θ(R_1, R_2, ε) = lim inf_{n→∞} (−1/n) log β_n is given and several interesting properties are revealed. The lower bound is tighter than that of Ahlswede and Csiszár. Furthermore, in the special case of testing against independence, this bound turns out to coincide with that given by them. The main arguments are devoted to the special case with R_2 = ∞, corresponding to full side information for Y^n (Ȳ^n). In particular, the compact solution is established for the complete data compression cases, which are useful in statistics from the practical point of view.
  • Spectral estimation from nonconsecutive data

    Page(s): 889 - 894

    A method is presented for determining the maximum entropy spectrum S(ω) of a process under the constraint that its autocorrelation R(m) is known for every m in a set D of nonconsecutive integers. The method involves a simple steepest ascent technique that is based entirely on Levinson's algorithm.
  • Improved error probability evaluation methods for direct detection optical communication systems

    Page(s): 839 - 848

    The problem of average error probability evaluation for direct detection binary optical communications in the presence of avalanche gain, intersymbol interference, and colored additive Gaussian noise is considered. Tight new upper and lower bounds, together with a modified Gaussian quadrature rule based on approximate moments, are derived and evaluated. The bounds are found to be much tighter than the Chernoff bound though only slightly more complex to evaluate, and can be used as approximations to the error probability in most cases of practical interest. Taken together the new bounds and the modified Gaussian quadrature rule form a comprehensive set of performance evaluation tools offering a judicious balance between complexity and accuracy.

Aims & Scope

IEEE Transactions on Information Theory publishes papers concerned with the transmission, processing, and utilization of information.
