IEEE Transactions on Information Theory

Volume 37 Issue 4 • Jul 1991

Displaying Results 1 - 25 of 37
  • A Bayesian approach for classification of Markov sources

    Publication Year: 1991, Page(s):1067 - 1071
    Cited by:  Papers (4)

    A Bayesian approach for classification of Markov sources whose parameters are not explicitly known is developed and studied. A universal classifier is derived and shown to achieve, within a constant factor, the minimum error probability in a Bayesian sense. The proposed classifier is based on sequential estimation of the parameters of the sources, and it is closely related to earlier proposed univ...

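The sequential-estimation idea above can be sketched in a few lines. The following toy classifier (binary alphabet, first-order chains, Laplace smoothing, plug-in likelihoods — all illustrative assumptions, not the paper's construction) assigns a test sequence to whichever of two Markov sources, each learned from a labeled training sequence, gives it the higher likelihood:

```python
import math

def transition_estimates(train, k=2):
    """Laplace-smoothed first-order transition probability estimates."""
    counts = [[0] * k for _ in range(k)]
    for a, b in zip(train, train[1:]):
        counts[a][b] += 1
    return [[(c + 1) / (sum(row) + k) for c in row] for row in counts]

def log_likelihood(seq, P):
    return sum(math.log(P[a][b]) for a, b in zip(seq, seq[1:]))

def classify(seq, trainings):
    """Index of the training source under which `seq` is most likely."""
    models = [transition_estimates(t) for t in trainings]
    return max(range(len(models)), key=lambda i: log_likelihood(seq, models[i]))

# Source 0 tends to repeat symbols; source 1 tends to alternate.
train0 = [0] * 50 + [1] * 50
train1 = [0, 1] * 50
```

A persistent test sequence such as [0, 0, 0, 1, 1, 1] is then attributed to source 0, an alternating one to source 1.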
  • Internal models and recursive estimation for 2-D isotropic random fields

    Publication Year: 1991, Page(s):1055 - 1066
    Cited by:  Papers (4)

    Efficient recursive smoothing algorithms are developed for isotropic random fields that can be obtained by passing white noise through rational filters. The estimation problem is shown to be equivalent to a countably infinite set of 1-D separable two-point boundary value smoothing problems. The 1-D smoothing problems are solved using a Markovianization approach followed by a standard 1-D smoothing...

  • Minimum complexity density estimation

    Publication Year: 1991, Page(s):1034 - 1054
    Cited by:  Papers (192)

    The authors introduce an index of resolvability that is proved to bound the rate of convergence of minimum complexity density estimators as well as the information-theoretic redundancy of the corresponding total description length. The results on the index of resolvability demonstrate the statistical effectiveness of the minimum description-length principle as a method of inference. The minimum co...

  • Zero-crossings of a wavelet transform

    Publication Year: 1991, Page(s):1019 - 1033
    Cited by:  Papers (409)  |  Patents (11)

    The completeness, stability, and application to pattern recognition of a multiscale representation based on zero-crossings are discussed. An alternative projection algorithm is described that reconstructs a signal from a zero-crossing representation, which is stabilized by keeping the value of the wavelet transform integral between each pair of consecutive zero-crossings. The reconstruction algorit...

  • Two-dimensional harmonic retrieval and its time-domain analysis technique

    Publication Year: 1991, Page(s):1185 - 1188
    Cited by:  Papers (10)

    Two-dimensional harmonic retrieval is examined theoretically by confirming that 2-D sinusoids in white noise can be modeled as a special 2-D autoregressive moving-average (ARMA) process whose AR parameters are identical to the MA ones. A new analysis technique for resolving 2-D sinusoids in white noise is proposed.

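The ARMA identity is easiest to see in one dimension (the 1-D analogue is used here for brevity; the paper treats the 2-D case): a sinusoid s_t = A cos(wt + phi) is annihilated by a(B) = 1 - 2cos(w)B + B^2, so the noisy observation y_t = s_t + w_t satisfies a(B)y_t = a(B)w_t, an ARMA process whose AR and MA polynomials coincide:

```python
import math

w, A, phi = 0.7, 2.0, 0.3          # illustrative frequency, amplitude, phase
s = [A * math.cos(w * t + phi) for t in range(100)]

# s_t - 2cos(w) s_{t-1} + s_{t-2} = 0 for every t (sum-to-product identity),
# so applying a(B) to s_t + noise leaves only the filtered noise term.
c = 2 * math.cos(w)
residual = [s[t] - c * s[t - 1] + s[t - 2] for t in range(2, len(s))]
max_residual = max(abs(r) for r in residual)
```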
  • Optimally near-far resistant multiuser detection in differentially coherent synchronous channels

    Publication Year: 1991, Page(s):1006 - 1018
    Cited by:  Papers (40)  |  Patents (4)

    The noncoherent demodulation of differentially phase-shift keyed signals transmitted simultaneously via a synchronous code-division multiple-access (CDMA) channel is studied under the assumption of white Gaussian background noise. A class of noncoherent linear detectors is defined with the objective of obtaining the optimal one. The performance criterion considered is near-far resistance that deno...

  • Strengthening Simmons' bound on impersonation

    Publication Year: 1991, Page(s):1182 - 1185
    Cited by:  Papers (11)

    Simmons' lower bound on impersonation, P_I ⩾ 2^(-I(M;E)), where M and E denote the message and the encoding rule, respectively, is strengthened by maximizing over the source statistics and by allowing dependence between the message and the encoding rule. The authors show that a refinement of their argument, which removes the assumption of independence ...

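Simmons' bound can be checked on a one-bit toy authentication code (an illustrative construction, not from the paper): source bit s and key bit e are uniform, and the transmitted message is (s, s XOR e). Here I(M;E) = 1 bit and the best impersonation attack succeeds with probability exactly 2^(-1), so the bound is met with equality:

```python
import math
from itertools import product

def msg(s, e):
    return (s, s ^ e)          # message = (source bit, authentication tag)

def accepted(m, e):
    s, t = m
    return t == s ^ e          # receiver recomputes the tag under key e

# Joint distribution of (message, key) for uniform s, e.
p_me = {}
for s, e in product((0, 1), repeat=2):
    p_me[(msg(s, e), e)] = p_me.get((msg(s, e), e), 0.0) + 0.25

p_m, p_e = {}, {}
for (m, e), p in p_me.items():
    p_m[m] = p_m.get(m, 0.0) + p
    p_e[e] = p_e.get(e, 0.0) + p

mutual_info = sum(p * math.log2(p / (p_m[m] * p_e[e]))
                  for (m, e), p in p_me.items())
p_impersonation = max(sum(p_e[e] for e in (0, 1) if accepted(m, e))
                      for m in p_m)
```

The paper's strengthening then maximizes the right-hand side of the bound over the source statistics.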
  • Index system and separability of constant weight Gray codes

    Publication Year: 1991, Page(s):1229 - 1233
    Cited by:  Papers (6)  |  Patents (2)

    A number system is developed for the conversion of natural numbers to the codewords of the Gray code G(n,k) of length n and weight k, and vice versa. The focus is on the subcode G(n,k) of G(n) consisting of those words of G(n) with precisely k 1-bits, 0 < k < n....

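One way to see such a code concretely (an illustration, not the paper's number system): the weight-k words of the standard binary reflected Gray code, taken in order, appear to differ pairwise in exactly two bit positions, so the subsequence is itself a constant-weight Gray code and indexing into it is a ranking problem. The assertion below verifies the two-bit property for a small case:

```python
def brgc(n):
    """Binary reflected Gray code on n bits, as integers."""
    return [i ^ (i >> 1) for i in range(1 << n)]

def constant_weight_seq(n, k):
    """Weight-k subsequence of the reflected Gray code."""
    return [w for w in brgc(n) if bin(w).count("1") == k]

seq = constant_weight_seq(6, 3)
# Consecutive words differ in exactly two bit positions (one 1-bit moves),
# i.e. each step is a transposition.
distances = [bin(a ^ b).count("1") for a, b in zip(seq, seq[1:])]
```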
  • Multivariate probability density deconvolution for stationary random processes

    Publication Year: 1991, Page(s):1105 - 1115
    Cited by:  Papers (44)

    The kernel-type estimation of the joint probability density functions of stationary random processes from noisy observations is considered. Precise asymptotic expressions and bounds on the mean-square estimation error are established, along with rates of mean-square convergence, for processes satisfying a variety of mixing conditions. The dependence of the convergence rates on the joint density of...

  • On the practical implication of mutual information for statistical decisionmaking

    Publication Year: 1991, Page(s):1151 - 1156
    Cited by:  Papers (10)

    A basic mathematical function that conjoins the two key conceptions of mutual information and Bayes risk is defined. Based on that function, some asymptotic theorems that verify an important implication of mutual information in the context of practical Bayesian decisionmaking are proven.

  • Worst-case interactive communication. II. Two messages are not optimal

    Publication Year: 1991, Page(s):995 - 1005
    Cited by:  Papers (29)  |  Patents (1)

    For pt.I see ibid., vol.36, no.5, p.1111-26, (1990). The author defines the chromatic-decomposition number of a hypergraph and shows that, under general conditions, it determines the two-message complexity. This result is then used to prove that two messages are not optimal. Protocols, complexities, and the characteristic hypergraph of (X,Y) are defined. The playoffs problem is describe...

  • Some contributions to a frequency location method due to He and Kedem

    Publication Year: 1991, Page(s):1177 - 1182
    Cited by:  Papers (17)

    The author derives a useful information-theoretic methodology from the tunable filter that automatically detects the frequency of an unknown signal in white Gaussian noise by adjusting its parameter to the correlation coefficient of its output. The tunable filter earlier proposed by S. He and B. Kedem (see ibid., vol.35, no.2, p.360-9, 1989) is known as the HK filter. The methodology sidesteps th...

  • Encoding unique global minima in nested neural networks

    Publication Year: 1991, Page(s):1158 - 1162
    Cited by:  Papers (2)

    Nested neural networks are constructed from outer products of patterns over {-1,0,1}^N, whose nonzero bits define subnetworks and the subcodes stored in them. The set of permissible words, which are network-size binary patterns composed of subcode words that agree in their common bits, is characterized and their number is derived. It is shown that if the bitwise products of the subcode w...

  • On the Gaarder-Slepian “tracking system” conjecture [source coding]

    Publication Year: 1991, Page(s):1165 - 1168
    Cited by:  Papers (3)

    The authors examine under what conditions, and with what notion of optimality, the coder of an optimal system can be operated with the information which is also accessible to the decoder. After a discussion of the difficulties involved, a theorem is proved for Markov sources which states that the extra memory of the coder can be substituted with independent randomization. The result is not tied to...

  • Bounds on the redundancy of binary alphabetical codes

    Publication Year: 1991, Page(s):1225 - 1229
    Cited by:  Papers (18)

    An alphabetical code is a code in which the numerical binary order of the codewords corresponds to the alphabetical order of the encoded symbols. A necessary and sufficient condition for the existence of a binary alphabetical code is presented. The redundancy of the optimum binary alphabetical code is given in comparison with the Huffman code and its upper bound, which is tighter than bounds previ...

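A greedy construction makes the existence question concrete (a sketch under the assumption that codewords are assigned left to right as aligned binary fractions; the paper's exact condition and redundancy bounds are in the text): each symbol takes the smallest codeword of its prescribed length lying numerically above everything already assigned, and when the assignment overflows, these lengths admit no alphabetical code.

```python
def alphabetical_code(lengths):
    """Greedy binary alphabetical code for codeword lengths given in
    symbol order; returns None when the greedy assignment overflows."""
    scale = max(lengths)
    code, v = [], 0                        # v: next free value on a 2^scale grid
    for l in lengths:
        step = 1 << (scale - l)            # grid cells a length-l word occupies
        v = (v + step - 1) // step * step  # align up to a multiple of step
        if v + step > (1 << scale):
            return None
        code.append(format(v >> (scale - l), f"0{l}b"))
        v += step
    return code
```

For lengths (1, 2, 2) this yields 0, 10, 11 — prefix-free and in increasing binary order — while the lengths (2, 1, 2) admit no alphabetical code even though they satisfy the Kraft inequality with equality.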
  • Explicit formulas for self-complementary normal bases in certain finite fields

    Publication Year: 1991, Page(s):1220 - 1222
    Cited by:  Papers (3)

    Explicit formulas are given for sets of p elements forming a self-complementary normal basis of GF(q^p) over GF(q), where p is the characteristic of GF(q). Using these formulas, a straightforward construction of self-complementary bases for GF(q^α) (where α = p^m) over GF(q) is also pre...

  • On extremal self-dual quaternary codes of lengths 18 to 28. II

    Publication Year: 1991, Page(s):1206 - 1216
    Cited by:  Papers (13)

    For pt.I see ibid., vol.36, no.3, p.651-60 (1990). A general decomposition theorem is applied to find all extremal self-dual quaternary codes of lengths 18 to 28 that have a nontrivial monomial automorphism of order a power of 3. Techniques to distinguish these codes are also presented. The author presents situations in which the equivalence of the codes under consideration can be decided.

  • New bounds on the redundancy of Huffman codes

    Publication Year: 1991, Page(s):1095 - 1104
    Cited by:  Papers (28)

    Upper and lower bounds are obtained for the redundancy of binary Huffman codes for a memoryless source whose least likely source letter probability is known. Tight upper bounds on redundancy in terms of the most and least likely source letter probabilities are provided.

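Redundancy here is the gap between the Huffman code's expected length and the source entropy. A minimal computation (standard heap-based Huffman construction; the distributions below are illustrative):

```python
import heapq
import math

def huffman_lengths(probs):
    """Codeword lengths of an optimal binary Huffman code."""
    heap = [(p, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    while len(heap) > 1:
        p1, s1 = heapq.heappop(heap)
        p2, s2 = heapq.heappop(heap)
        for i in s1 + s2:                  # every merge adds one bit
            lengths[i] += 1
        heapq.heappush(heap, (p1 + p2, s1 + s2))
    return lengths

def redundancy(probs):
    """Expected Huffman length minus source entropy, in bits."""
    avg = sum(p * l for p, l in zip(probs, huffman_lengths(probs)))
    ent = -sum(p * math.log2(p) for p in probs if p > 0)
    return avg - ent
```

Dyadic probabilities give zero redundancy; in general 0 ⩽ redundancy < 1, and the paper tightens these classical bounds using the least likely letter probability.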
  • Decoding binary 2-D cyclic codes by the 2-D Berlekamp-Massey algorithm

    Publication Year: 1991, Page(s):1200 - 1203
    Cited by:  Papers (18)

    A method of decoding two-dimensional (2-D) cyclic codes by applying the 2-D Berlekamp-Massey algorithm is proposed. To explain this decoding method, the author introduces a subclass of 2-D cyclic codes, which are called 2-D BCH codes due to their similarity with BCH codes. It is shown that there are some short 2-D cyclic codes with a better cost parameter value. The merit of the approach is verifi...

  • Reuseable memories in the light of the old arbitrarily varying and a new outputwise varying channel theory

    Publication Year: 1991, Page(s):1143 - 1150
    Cited by:  Papers (5)

    Arbitrarily varying channels have been introduced as a model for transmission in cases of jamming. It is shown that this theory applies naturally to memories and yields, in a unified way, some new and old capacity theorems for write-unidirectional memories with side information. The role of cycles via outputwise varying channels is discussed. Exact conditions for memories to have positive capacity...

  • Orthogonality of binary codes derived from Reed-Solomon codes

    Publication Year: 1991, Page(s):983 - 994
    Cited by:  Papers (5)

    The author provides a simple method for determining the orthogonality of binary codes derived from Reed-Solomon codes and other cyclic codes of length 2^m - 1 over GF(2^m) for m bits. Depending on the spectra of the codes, it is sufficient to test a small number of single-frequency pairs for orthogonality, and a pair of bases may be tested in each case simply by summing...

  • Flicker noise and the estimation of the Allan variance

    Publication Year: 1991, Page(s):1173 - 1177
    Cited by:  Papers (15)

    Flicker noise is a random process observed in a variety of contexts, including current fluctuations in metal film and semiconductor devices, loudness fluctuations in speech and music, and neurological patterns. The quadratic-mean convergence of appropriate estimates of the Allan variance for flicker noise is established when the latter is modeled as a stochastic process with stationary increments....

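The Allan variance at unit averaging time is (1/2)E[(y_{k+1} - y_k)^2]. A minimal sample estimator (illustrative; the paper's subject is its convergence for flicker noise, but for i.i.d. noise the expected value is simply the noise variance):

```python
import random

def allan_variance(y):
    """Allan variance estimate at unit averaging time from
    fractional-frequency samples y: half the mean squared first difference."""
    diffs = [(b - a) ** 2 for a, b in zip(y, y[1:])]
    return 0.5 * sum(diffs) / len(diffs)

# For i.i.d. noise with variance 1, E[(y_{k+1} - y_k)^2] = 2, so the
# estimate converges to 1.
rng = random.Random(0)
white = [rng.gauss(0.0, 1.0) for _ in range(100_000)]
av_white = allan_variance(white)
```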
  • Theory of lattice-based fine-coarse vector quantization

    Publication Year: 1991, Page(s):1072 - 1084
    Cited by:  Papers (7)

    The performance of a lattice-based fast vector quantization (VQ) method, which yields rate-distortion performance close to that of an optimal VQ, is analyzed. The method, which is a special case of fine-coarse vector quantization (FCVQ), uses the cascade of a fine lattice quantizer and a coarse optimal VQ to encode a given source vector. The second stage is implemented in the form of a lookup table, whi...

  • Zero-crossing rates of functions of Gaussian processes

    Publication Year: 1991, Page(s):1188 - 1194
    Cited by:  Papers (27)  |  Patents (3)

    Formulas for the expected zero-crossing rates of random processes that are monotone transformations of Gaussian processes can be obtained by using two different techniques. The first technique involves derivation of the expected zero-crossing rate for discrete-time processes and extends the result to the continuous-time case by using an appropriate limiting argument. The second is a direct method ...

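For the Gaussian case itself, the discrete-time answer is the classical cosine formula: a zero-mean stationary Gaussian sequence with lag-one correlation rho changes sign between consecutive samples with probability arccos(rho)/pi. A Monte Carlo check against a unit-variance AR(1) process (illustrative parameters):

```python
import math
import random

def expected_zc_rate(rho):
    """P(X_t and X_{t+1} have opposite signs) for a zero-mean stationary
    Gaussian sequence with lag-one correlation rho (cosine formula)."""
    return math.acos(rho) / math.pi

def simulated_zc_rate(rho, n=200_000, seed=1):
    """Fraction of sign changes along a unit-variance Gaussian AR(1) path."""
    rng = random.Random(seed)
    sigma = math.sqrt(1.0 - rho * rho)
    x, crossings = rng.gauss(0.0, 1.0), 0
    for _ in range(n):
        y = rho * x + sigma * rng.gauss(0.0, 1.0)  # AR(1) step, unit variance
        if x * y < 0:
            crossings += 1
        x = y
    return crossings / n
```

For rho = 0 the formula gives rate 1/2, as expected for independent samples.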
  • Note on 'The calculation of the probability of detection and the generalized Marcum Q-function'

    Publication Year: 1991
    Cited by:  Papers (6)

    The author presents corrections to his original paper (see ibid., vol.35, no.2, p.389-400, 1989). The corrections concern computational cases using the steepest descent integration technique. It is pointed out that, for certain specific parameter ranges, the calculation error is too large to be accounted for by accumulated round-off error.

Aims & Scope

IEEE Transactions on Information Theory publishes papers concerned with the transmission, processing, and utilization of information.

Full Aims & Scope

Meet Our Editors

Editor-in-Chief
Prakash Narayan 

Department of Electrical and Computer Engineering