IEEE Transactions on Information Theory

Volume 37 Issue 4 • July 1991

Displaying Results 1 - 25 of 37
  • Flicker noise and the estimation of the Allan variance

    Publication Year: 1991, Page(s):1173 - 1177
    Cited by:  Papers (15)

    Flicker noise is a random process observed in a variety of contexts, including current fluctuations in metal film and semiconductor devices, loudness fluctuations in speech and music, and neurological patterns. The quadratic-mean convergence of appropriate estimates of the Allan variance for flicker noise is established when the latter is modeled as a stochastic process with stationary increments....

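The Allan variance of the abstract above is straightforward to estimate from data. A hedged sketch follows: this is the standard non-overlapping two-sample estimator, not necessarily the specific estimators whose convergence the paper analyzes.

```python
def allan_variance(y, m=1):
    """Non-overlapping Allan variance of samples y at averaging factor m.

    Standard two-sample form: 0.5 * E[(ybar_{k+1} - ybar_k)^2], where
    ybar_k are block averages of length m.
    """
    nblocks = len(y) // m
    ybar = [sum(y[i * m:(i + 1) * m]) / m for i in range(nblocks)]
    diffs = [(ybar[k + 1] - ybar[k]) ** 2 for k in range(nblocks - 1)]
    return 0.5 * sum(diffs) / len(diffs)
```

For white frequency noise of variance sigma^2 the estimate approaches sigma^2/m, which gives a simple sanity check on simulated data.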
  • Some contributions to a frequency location method due to He and Kedem

    Publication Year: 1991, Page(s):1177 - 1182
    Cited by:  Papers (18)

    The author derives a useful information-theoretic methodology from the tunable filter that automatically detects the frequency of an unknown signal in white Gaussian noise by adjusting its parameter to the correlation coefficient of its output. The tunable filter earlier proposed by S. He and B. Kedem (see ibid., vol.35, no.2, p.360-9, 1989) is known as the HK filter. The methodology sidesteps th...

  • Strengthening Simmons' bound on impersonation

    Publication Year: 1991, Page(s):1182 - 1185
    Cited by:  Papers (12)

    Simmons' lower bound on impersonation, P_1 >= 2^(-I(M;E)), where M and E denote the message and the encoding rule, respectively, is strengthened by maximizing over the source statistics and by allowing dependence between the message and the encoding rule. The authors show that a refinement of their argument, which removes the assumption of independence between E and the source state S, le...

  • Two-dimensional harmonic retrieval and its time-domain analysis technique

    Publication Year: 1991, Page(s):1185 - 1188
    Cited by:  Papers (10)

    Two-dimensional harmonic retrieval is examined theoretically by confirming that 2-D sinusoids in white noise can be modeled as a special 2-D autoregressive moving average (ARMA) process whose AR parameters are identical to the MA ones. A new analysis technique for resolving 2-D sinusoids in white noise is proposed.

  • Zero-crossing rates of functions of Gaussian processes

    Publication Year: 1991, Page(s):1188 - 1194
    Cited by:  Papers (27)  |  Patents (3)

    Formulas for the expected zero-crossing rates of random processes that are monotone transformations of Gaussian processes can be obtained by using two different techniques. The first technique involves derivation of the expected zero-crossing rate for discrete-time processes and extends the result to the continuous-time case by using an appropriate limiting argument. The second is a direct method ...

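For intuition on the discrete-time half of this argument: a stationary zero-mean Gaussian sequence with lag-one correlation rho has expected sign-change rate acos(rho)/pi per step, and a monotone sign-preserving transformation leaves every crossing unchanged. A Monte Carlo sketch, assuming an AR(1) model purely for illustration (not taken from the paper):

```python
import math
import random

def zero_crossing_rate(x):
    """Fraction of adjacent sample pairs with a sign change."""
    flips = sum(1 for a, b in zip(x, x[1:]) if (a >= 0) != (b >= 0))
    return flips / (len(x) - 1)

def ar1_gaussian(n, rho, rng):
    """Stationary zero-mean, unit-variance Gaussian AR(1) sequence."""
    s = math.sqrt(1.0 - rho * rho)      # innovation scale keeps variance 1
    x = [rng.gauss(0.0, 1.0)]
    for _ in range(n - 1):
        x.append(rho * x[-1] + s * rng.gauss(0.0, 1.0))
    return x

rng = random.Random(1)
rho = 0.5
x = ar1_gaussian(200_000, rho, rng)
empirical = zero_crossing_rate(x)
predicted = math.acos(rho) / math.pi    # arcsine-type formula: 1/3 for rho = 0.5
# A monotone odd map such as tanh preserves every crossing exactly.
transformed = zero_crossing_rate([math.tanh(v) for v in x])
```

The transformed sequence has the identical crossing count, which is the mechanism the paper's formulas exploit.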
  • Reduced lists of error patterns for maximum likelihood soft decoding

    Publication Year: 1991, Page(s):1194 - 1200
    Cited by:  Papers (33)

    A method whereby a substantially reduced family of error patterns, called survivors, may be created for maximum likelihood soft decoding is introduced. The family of survivors depends on the received word. A decoder based on this approach first forms the survivors, then scores them. Rather than obtaining the survivors by online elimination of error patterns, the use of predetermined lists that repr...

  • Decoding binary 2-D cyclic codes by the 2-D Berlekamp-Massey algorithm

    Publication Year: 1991, Page(s):1200 - 1203
    Cited by:  Papers (20)

    A method of decoding two-dimensional (2-D) cyclic codes by applying the 2-D Berlekamp-Massey algorithm is proposed. To explain this decoding method, the author introduces a subclass of 2-D cyclic codes, which are called 2-D BCH codes due to their similarity with BCH codes. It is shown that there are some short 2-D cyclic codes with a better cost parameter value. The merit of the approach is verifi...

  • On (k, t)-subnormal covering codes

    Publication Year: 1991, Page(s):1203 - 1206
    Cited by:  Papers (11)

    The concept of a (k, t)-subnormal covering code is defined. It is discussed how an amalgamated-direct-sum-like construction can be used to combine such codes. The existence of optimal (q, n, M)1 codes C is discussed such that by puncturing the first coordinate of C one obtains a code with (q, 1)-subnorm 2.

  • On extremal self-dual quaternary codes of lengths 18 to 28. II

    Publication Year: 1991, Page(s):1206 - 1216
    Cited by:  Papers (13)

    For pt.I see ibid., vol.36, no.3, p.651-60 (1990). A general decomposition theorem is applied to find all extremal self-dual quaternary codes of lengths 18 to 28 that have a nontrivial monomial automorphism of order a power of 3. Techniques to distinguish these codes are also presented. The author presents situations in which the equivalence of the codes under consideration can be decided.

  • A systolic Reed-Solomon encoder

    Publication Year: 1991, Page(s):1217 - 1220
    Cited by:  Papers (15)  |  Patents (8)

    An architecture for a Reed-Solomon (RS) encoder is presented, consisting of r+1 systolic cells, where r is the redundancy of the code. The systolic encoder is systematic, does not contain any feedback or other global signals, its systolic cells are of low complexity, and it is easily reconfigurable for variable redundancy and changes in the choice of generator polynomial of the code. The encoding ...

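Independent of the systolic architecture, the encoding operation itself is systematic polynomial division: the r parity symbols are the remainder of x^r * m(x) modulo the generator polynomial. A software sketch over GF(2^8), a common field choice; the paper's architecture and field parameters are not assumed here:

```python
# GF(2^8) log/antilog tables, primitive polynomial x^8 + x^4 + x^3 + x^2 + 1 (0x11d).
EXP = [0] * 512
LOG = [0] * 256
_v = 1
for _i in range(255):
    EXP[_i] = _v
    LOG[_v] = _i
    _v <<= 1
    if _v & 0x100:
        _v ^= 0x11d
for _i in range(255, 512):
    EXP[_i] = EXP[_i - 255]

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def rs_generator(r):
    """g(x) = prod_{i=0}^{r-1} (x + alpha^i), highest-degree coefficient first."""
    g = [1]
    for i in range(r):
        shifted = g + [0]                   # x * g(x)
        for j in range(len(g)):
            shifted[j + 1] ^= gf_mul(g[j], EXP[i])   # + alpha^i * g(x)
        g = shifted
    return g

def rs_encode(msg, r):
    """Systematic encoding: append the remainder of x^r * m(x) mod g(x)."""
    g = rs_generator(r)
    rem = list(msg) + [0] * r
    for i in range(len(msg)):
        coef = rem[i]
        if coef:
            for j in range(1, len(g)):      # g is monic, skip the leading 1
                rem[i + j] ^= gf_mul(g[j], coef)
    return list(msg) + rem[len(msg):]

def poly_eval(p, x):
    """Horner evaluation over GF(2^8), highest-degree coefficient first."""
    y = 0
    for c in p:
        y = gf_mul(y, x) ^ c
    return y
```

Every valid codeword evaluates to zero at the generator roots alpha^0, ..., alpha^(r-1), which gives a quick self-check.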
  • Explicit formulas for self-complementary normal bases in certain finite fields

    Publication Year: 1991, Page(s):1220 - 1222
    Cited by:  Papers (3)

    Explicit formulas are given for sets of p elements forming a self-complementary normal basis of GF(q^p) over GF(q), where p is the characteristic of GF(q). Using these formulas, a straightforward construction of self-complementary bases for GF(q^alpha) (where alpha = p^m) over GF(q) is also presented.

  • Weight enumerators of self-dual codes

    Publication Year: 1991, Page(s):1222 - 1225
    Cited by:  Papers (48)

    Some construction techniques for self-dual codes are investigated, and the authors construct a singly-even self-dual (48,24,10)-code with a weight enumerator that was not known to be attainable. It is shown that there exists a singly-even self-dual code C' of length n=48 and minimum weight d=10 whose weight enumerator is prescribed in the work of J.H. Conway et al. (see ibid., vol.36, no.5, p.1319...

  • Bounds on the redundancy of binary alphabetical codes

    Publication Year: 1991, Page(s):1225 - 1229
    Cited by:  Papers (20)

    An alphabetical code is a code in which the numerical binary order of the codewords corresponds to the alphabetical order of the encoded symbols. A necessary and sufficient condition for the existence of a binary alphabetical code is presented. The redundancy of the optimum binary alphabetical code is given in comparison with the Huffman code and its upper bound, which is tighter than bounds previ...

  • Index system and separability of constant weight Gray codes

    Publication Year: 1991, Page(s):1229 - 1233
    Cited by:  Papers (6)  |  Patents (2)

    A number system is developed for the conversion of natural numbers to the codewords of the Gray code G(n,k) of length n and weight k, and vice versa. The focus is on the subcode G(n,k) of G(n) consisting of those words of G(n) with precisely k 1-bits, 0<k<n. This code is called the constant weight Gray code of length n and weight k. As an application sharp lower and upper bounds are derived ...

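As a hedged illustration of number-to-codeword conversion for constant-weight words: the sketch below uses the standard combinatorial number system in colexicographic order, not the paper's Gray-code index system for G(n,k), so consecutive indices here are generally not Gray-adjacent.

```python
from math import comb

def unrank(n, k, r):
    """r-th (0-based, colex order) binary word of length n and weight k."""
    bits = [0] * n
    for pos in range(n - 1, -1, -1):
        if k == 0:
            break
        if r >= comb(pos, k):       # greedy: take the highest position that fits
            bits[pos] = 1
            r -= comb(pos, k)
            k -= 1
    return bits

def rank(bits):
    """Inverse of unrank: colex index of a constant-weight word."""
    r = k = 0
    for pos, b in enumerate(bits):
        if b:
            k += 1
            r += comb(pos, k)
    return r
```

The pair gives a bijection between {0, ..., C(n,k)-1} and the weight-k words of length n, the same kind of correspondence the paper's index system provides for G(n,k).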
  • Note on 'The calculation of the probability of detection and the generalized Marcum Q-function'

    Publication Year: 1991
    Cited by:  Papers (6)

    The author presents corrections to his original paper (see ibid., vol.35, no.2, p.389-400, 1989). The corrections concern computational cases using the steepest descent integration technique. It is pointed out that, for certain specific parameter ranges, the calculation error is too large to be accounted for by accumulated round-off error.

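For integer order M, the generalized Marcum Q-function also has a well-conditioned series form (a Poisson mixture of Poisson tail probabilities) that can serve as a reference against which steepest-descent values are compared. This sketch is the canonical series, not the note's method:

```python
import math

def marcum_q(M, a, b, terms=200):
    """Q_M(a, b) for integer M >= 1 via the canonical series
    Q_M(a,b) = sum_k Poisson(k; a^2/2) * P[Poisson(b^2/2) <= M + k - 1]."""
    x, y = a * a / 2.0, b * b / 2.0
    pois_x = math.exp(-x)           # Poisson(x) pmf at k = 0
    pmf_y = math.exp(-y)            # Poisson(y) pmf at j = 0
    cdf_y = pmf_y
    for j in range(1, M):           # accumulate P[Poisson(y) <= M - 1]
        pmf_y *= y / j
        cdf_y += pmf_y
    total = 0.0
    for k in range(terms):
        total += pois_x * cdf_y
        pois_x *= x / (k + 1)       # advance the pmf to k + 1
        pmf_y *= y / (M + k)        # extend the cdf threshold to M + k
        cdf_y += pmf_y
    return total
```

Known special cases make convenient checks: Q_1(0, b) = exp(-b^2/2), and Q_M(a, 0) = 1.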
  • On multilevel block modulation codes

    Publication Year: 1991, Page(s):965 - 975
    Cited by:  Papers (63)  |  Patents (5)

    The multilevel technique for combining block coding and modulation is investigated. A general formulation is presented for multilevel modulation codes in terms of component codes with appropriate distance measures. A specific method for constructing multilevel block modulation codes with interdependency among component codes is proposed. Given a multilevel block modulation code C with no interdepe...

  • Limiting efficiencies of burst-correcting array codes

    Publication Year: 1991, Page(s):976 - 982
    Cited by:  Papers (2)

    The author evaluates the limiting efficiencies e(-s) of burst-correcting array codes A(n_1, n_2, -s) for all negative readouts -s as n_2 tends to infinity and n_1 is properly chosen to maximize the efficiency. Specializing the result to the products of the first i primes, denoted s_i (1<=i<infinity), which are optimal choices for readouts, gives the expression e...

  • Orthogonality of binary codes derived from Reed-Solomon codes

    Publication Year: 1991, Page(s):983 - 994
    Cited by:  Papers (5)

    The author provides a simple method for determining the orthogonality of binary codes derived from Reed-Solomon codes and other cyclic codes of length 2^m - 1 over GF(2^m), with each symbol represented by m bits. Depending on the spectra of the codes, it is sufficient to test a small number of single-frequency pairs for orthogonality, and a pair of bases may be tested in each case simply by summing the appropriate pow...

  • Worst-case interactive communication. II. Two messages are not optimal

    Publication Year: 1991, Page(s):995 - 1005
    Cited by:  Papers (31)  |  Patents (1)

    For pt.I see ibid., vol.36, no.5, p.1111-26 (1990). The author defines the chromatic-decomposition number of a hypergraph and shows that, under general conditions, it determines the two-message complexity. This result is then used to prove that two messages are not optimal. Protocols, complexities, and the characteristic hypergraph of (X,Y) are defined. The playoffs problem is described. Althou...

  • Optimally near-far resistant multiuser detection in differentially coherent synchronous channels

    Publication Year: 1991, Page(s):1006 - 1018
    Cited by:  Papers (40)  |  Patents (4)

    The noncoherent demodulation of differentially phase-shift keyed signals transmitted simultaneously via a synchronous code-division multiple-access (CDMA) channel is studied under the assumption of white Gaussian background noise. A class of noncoherent linear detectors is defined with the objective of obtaining the optimal one. The performance criterion considered is near-far resistance that deno...

  • Zero-crossings of a wavelet transform

    Publication Year: 1991, Page(s):1019 - 1033
    Cited by:  Papers (419)  |  Patents (11)

    The completeness, stability, and application to pattern recognition of a multiscale representation based on zero-crossings is discussed. An alternative projection algorithm is described that reconstructs a signal from a zero-crossing representation, which is stabilized by keeping the value of the wavelet transform integral between each pair of consecutive zero-crossings. The reconstruction algorit...

  • Minimum complexity density estimation

    Publication Year: 1991, Page(s):1034 - 1054
    Cited by:  Papers (200)

    The authors introduce an index of resolvability that is proved to bound the rate of convergence of minimum complexity density estimators as well as the information-theoretic redundancy of the corresponding total description length. The results on the index of resolvability demonstrate the statistical effectiveness of the minimum description-length principle as a method of inference. The minimum co...

  • Internal models and recursive estimation for 2-D isotropic random fields

    Publication Year: 1991, Page(s):1055 - 1066
    Cited by:  Papers (5)

    Efficient recursive smoothing algorithms are developed for isotropic random fields that can be obtained by passing white noise through rational filters. The estimation problem is shown to be equivalent to a countably infinite set of 1-D separable two-point boundary value smoothing problems. The 1-D smoothing problems are solved using a Markovianization approach followed by a standard 1-D smoothing...

  • A Bayesian approach for classification of Markov sources

    Publication Year: 1991, Page(s):1067 - 1071
    Cited by:  Papers (4)

    A Bayesian approach for classification of Markov sources whose parameters are not explicitly known is developed and studied. A universal classifier is derived and shown to achieve, within a constant factor, the minimum error probability in a Bayesian sense. The proposed classifier is based on sequential estimation of the parameters of the sources, and it is closely related to earlier proposed univ...

  • Theory of lattice-based fine-coarse vector quantization

    Publication Year: 1991, Page(s):1072 - 1084
    Cited by:  Papers (7)

    The performance of a lattice-based fast vector quantization (VQ) method, which yields rate-distortion performance close to that of an optimal VQ, is analyzed. The method, which is a special case of fine-coarse vector quantization (FCVQ), uses the cascade of a fine lattice quantizer and a coarse optimal VQ to encode a given source vector. The second stage is implemented in the form of a lookup table, whi...


Aims & Scope

IEEE Transactions on Information Theory publishes papers concerned with the transmission, processing, and utilization of information.


Meet Our Editors

Editor-in-Chief
Alexander Barg

Department of Electrical and Computer Engineering and the Institute for Systems Research, University of Maryland

email: abarg-ittrans@ece.umd.edu