
IEEE Transactions on Information Theory

Issue 3 • May 1996


Displaying Results 1 - 25 of 37
  • An algorithm for identifying rate (n-1)/n catastrophic punctured convolutional encoders

    Publication Year: 1996 , Page(s): 1010 - 1013
    Cited by:  Papers (2)  |  Patents (1)

    It is known that both Viterbi and sequential decoding of convolutional codes can be greatly simplified by employing punctured convolutional codes, which are obtained by periodically deleting some of the bits of a low-rate convolutional code. Even if the original low-rate convolutional code is noncatastrophic, some deleting maps may result in rate (n-1)/n catastrophic punctured encoders. An algorithm is presented to identify such encoders when the original rate 1/b encoder is antipodal. The major part of the algorithm solves a linear equation in ν+1 variables, where ν is the constraint length of the original rate 1/b code.

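A minimal sketch of the puncturing operation the abstract refers to: the output of a low-rate encoder is periodically thinned according to a fixed deleting map. The rate-1/2 encoder with (7, 5) octal generators and the rate-2/3 pattern below are textbook choices for illustration only; the paper's algorithm addresses general rate 1/b antipodal encoders and decides which deleting maps produce catastrophic punctured encoders.

```python
def conv_encode_r12(bits, g=(0b111, 0b101)):
    """Rate-1/2 feedforward convolutional encoder with (7, 5) octal generators
    (an illustrative textbook encoder, not one prescribed by the paper)."""
    state, out = 0, []
    for b in bits:
        state = ((state << 1) | b) & 0b111             # shift the new bit into the register
        out.append((bin(state & g[0]).count("1") & 1,  # parity w.r.t. generator 7
                    bin(state & g[1]).count("1") & 1)) # parity w.r.t. generator 5
    return out

def puncture(pairs, pattern=((1, 1), (1, 0))):
    """Periodically delete coded bits: pattern[j][i] == 1 keeps output i at slot j
    of the puncturing period; ((1, 1), (1, 0)) turns rate 1/2 into rate 2/3."""
    kept = []
    for j, pair in enumerate(pairs):
        for bit, keep in zip(pair, pattern[j % len(pattern)]):
            if keep:
                kept.append(bit)
    return kept

coded = conv_encode_r12([1, 0, 1, 1, 0, 0])
print(puncture(coded))   # 6 input bits -> 9 coded bits, i.e. rate 2/3
```
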
  • Contributors

    Publication Year: 1996
  • More on the covering radius of BCH codes

    Publication Year: 1996 , Page(s): 1023 - 1028
    Cited by:  Papers (3)

    New lower bounds on the minimum length of t-error-correcting BCH codes with covering radius at most 2t are derived.

  • Maximum-likelihood parameter estimation of the harmonic, evanescent, and purely indeterministic components of discrete homogeneous random fields

    Publication Year: 1996 , Page(s): 916 - 930
    Cited by:  Papers (7)

    This paper presents a maximum-likelihood solution to the general problem of fitting a parametric model to observations from a single realization of a two-dimensional (2-D) homogeneous random field with mixed spectral distribution. On the basis of a 2-D Wold-like decomposition, the field is represented as a sum of mutually orthogonal components of three types: purely indeterministic, harmonic, and evanescent. The suggested algorithm involves a two-stage procedure. In the first stage, we obtain a suboptimal initial estimate for the parameters of the spectral support of the evanescent and harmonic components. In the second stage, we refine these initial estimates by iterative maximization of the conditional likelihood of the observed data, which is expressed as a function of only the parameters of the spectral supports of the evanescent and harmonic components. The solution for the unknown spectral supports of the harmonic and evanescent components reduces the problem of solving for the other unknown parameters of the field to a linear least squares problem. The Cramer-Rao lower bound on the accuracy of jointly estimating the parameters of the different components is derived, and it is shown that the bounds on the purely indeterministic and deterministic components are decoupled. Numerical evaluation of the bounds provides some insight into the effects of various parameters on the achievable estimation accuracy. The performance of the maximum-likelihood algorithm is illustrated by Monte Carlo simulations and is compared with the Cramer-Rao bound.

  • On breaking a Huffman code

    Publication Year: 1996 , Page(s): 972 - 976
    Cited by:  Papers (12)

    We examine the problem of deciphering a file that has been Huffman coded, but not otherwise encrypted. We find that a Huffman code can be surprisingly difficult to cryptanalyze. We present a detailed analysis of the situation for a three-symbol source alphabet and give some results for general finite alphabets.

  • Number-theoretic solutions to intercept time problems

    Publication Year: 1996 , Page(s): 959 - 971
    Cited by:  Papers (8)

    A good radar warning receiver should observe a radar very soon after it begins transmitting, so in designing such a receiver we would like to ensure that the intercept time is low or that the probability of intercept after a specified time is high. We consider a number of problems concerning the overlaps or coincidences of two periodic pulse trains. We show that the first intercept time of two pulse trains started in phase is a homogeneous Diophantine approximation problem which can be solved using the convergents of the simple continued fraction (s.c.f.) expansion of the ratio of their pulse repetition intervals (PRIs). We find that the intercept time for arbitrary starting phases is an inhomogeneous Diophantine approximation problem which can be solved in a similar manner. We give a recurrence equation to determine the times at which subsequent coincidences occur. We then demonstrate how the convergents of the s.c.f. expansion can be used to determine the probability of intercept of the two pulse trains after a specified time when one or both of the initial phases are random. Finally, we discuss how the probability of intercept varies as a function of the PRIs and its dependence on the Farey points.

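The continued-fraction machinery mentioned in the abstract is easy to sketch: the convergents p/q of the PRI ratio T1/T2 mark pulse counts at which q·T1 ≈ p·T2, i.e., near-coincidences of the two in-phase pulse trains, with the approximation tightening from one convergent to the next. The PRI values below are invented for the demo; the paper's full treatment also covers arbitrary starting phases, subsequent coincidences, and probability of intercept.

```python
from fractions import Fraction

def scf_convergents(x, depth=10):
    """Partial quotients and convergents of the simple continued fraction of x."""
    a, conv = [], []
    h2, h1, k2, k1 = 0, 1, 1, 0                # h_{n-2}, h_{n-1}, k_{n-2}, k_{n-1}
    for _ in range(depth):
        ai = int(x)
        a.append(ai)
        h, k = ai * h1 + h2, ai * k1 + k2      # standard convergent recursion
        conv.append(Fraction(h, k))
        h2, h1, k2, k1 = h1, h, k1, k
        if x == ai:                            # expansion terminated (rational x)
            break
        x = 1 / (x - ai)
    return a, conv

# Two pulse trains started in phase, with hypothetical PRIs of 99.7 and 130.3 time units.
T1, T2 = Fraction(997, 10), Fraction(1303, 10)
_, convergents = scf_convergents(T1 / T2)
for c in convergents:
    p, q = c.numerator, c.denominator
    print(f"q={q}, p={p}: |q*T1 - p*T2| = {float(abs(q * T1 - p * T2)):.4f}")
```
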
  • Rotational invariance of trellis codes. II. Group codes and decoders

    Publication Year: 1996 , Page(s): 766 - 778
    Cited by:  Papers (13)

    For Part I, see ibid., vol. 42, no. 3, pp. 751-765 (1996). In Part I, general results on rotationally invariant codes and encoders were derived assuming no algebraic structure. In Part II, trellis codes based on group systems are considered as a special case for which code and encoder constructions are particularly simple. Rotational invariance is expressed as an algebraic constraint on a group code, and algebraic constructions are found both for “absorbed precoder” encoders and for encoders with separate differential precoders. Finally, the various encoder forms used to achieve rotational invariance are compared based on their performance on an AWGN channel.

  • The CEO problem [multiterminal source coding]

    Publication Year: 1996 , Page(s): 887 - 902
    Cited by:  Papers (234)  |  Patents (4)

    We consider a new problem in multiterminal source coding motivated by the following decentralized communication/estimation task. A firm's Chief Executive Officer (CEO) is interested in the data sequence {X(t)}_{t=1}^∞, which cannot be observed directly, perhaps because it represents tactical decisions by a competing firm. The CEO deploys a team of L agents who observe independently corrupted versions of {X(t)}_{t=1}^∞. Because {X(t)} is only one among many pressing matters to which the CEO must attend, the combined data rate at which the agents may communicate information about their observations to the CEO is limited to, say, R bits per second. If the agents were permitted to confer and pool their data, then in the limit as L→∞ they usually would be able to smooth out their independent observation noises entirely. Then they could use their R bits per second to provide the CEO with a representation of {X(t)} with fidelity D(R), where D(·) is the distortion-rate function of {X(t)}. In particular, with such data pooling D can be made arbitrarily small if R exceeds the entropy rate H of {X(t)}. Suppose, however, that the agents are not permitted to convene, Agent i having to send data based solely on his own noisy observations {Y_i(t)}. We show that then there does not exist a finite value of R for which even infinitely many agents can make D arbitrarily small. Furthermore, in this isolated-agents case we determine the asymptotic behavior of the minimal error frequency in the limit as L and then R tend to infinity.

  • Locally optimum distributed detection of correlated random signals based on ranks

    Publication Year: 1996 , Page(s): 931 - 942
    Cited by:  Papers (14)

    Distributed signal detection schemes have received significant attention, but most research has focused on cases where the observations at the different sensors are independent and the statistical model for the observations is completely known. If the observations at the different sensors consist of noisy versions of random signals which were produced by the same source, then these observations may not be independent. It is also possible that the noise distribution may not be completely known. Cases where weak random signals are observed in possibly non-Gaussian additive noise are considered. The focus is on cases where the sensor tests are based only on the ranks and signs of the observations. Numerical results are provided which indicate that distributed schemes based on ranks and signs are less sensitive to the exact noise statistics when compared to optimum schemes based directly on the observations. This is especially true for some cases where the actual noise distribution has heavy tails, which can cause the optimum schemes based directly on the observations to perform poorly. Analytical forms are given for the locally optimum sensor test statistics based on the ranks and signs of the observations, and we use these to find the best distributed detection schemes for some cases. In the course of obtaining our results, a general set of necessary conditions is given which provide the analytical forms of the locally optimum distributed sensor tests for cases where the observations are discrete random variables. Conditions of this type have not been given previously.

  • Near optimal single-track Gray codes

    Publication Year: 1996 , Page(s): 779 - 789
    Cited by:  Papers (13)

    Single-track Gray codes are a special class of Gray codes which have advantages over conventional Gray codes in certain quantization and coding applications. The problem of constructing high-period single-track Gray codes is considered. Three iterative constructions are given, along with a heuristic method for obtaining good seed-codes. In combination, these yield many families of very high-period single-track Gray codes. In particular, for m⩾3, codes of length n=2^m and period 2^n−2n are obtained.

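Under the standard definition, the two properties that make a cyclic word sequence a single-track Gray code are mechanical to verify: cyclically adjacent words differ in exactly one bit, and every bit track is a cyclic shift of the first track. The checker and the period-6, 3-bit toy example below are ours for illustration; the paper's constructions target far longer codes (e.g., period 2^n − 2n for length n = 2^m).

```python
def is_single_track_gray(words):
    """Check the Gray and single-track properties for a cyclic list of bit tuples."""
    P, n = len(words), len(words[0])
    # Gray property: cyclically adjacent codewords differ in exactly one position.
    gray = all(sum(a != b for a, b in zip(words[i], words[(i + 1) % P])) == 1
               for i in range(P))
    # Single-track property: every track (coordinate sequence) is a shift of track 0.
    tracks = [tuple(w[j] for w in words) for j in range(n)]
    shifts = {tuple(tracks[0][(i + s) % P] for i in range(P)) for s in range(P)}
    return gray and all(t in shifts for t in tracks)

# Period-6, 3-bit example: every track is a shift (by 0, 2, 4) of one length-6 track.
track0 = (0, 0, 0, 1, 1, 1)
words = [(track0[i], track0[(i + 2) % 6], track0[(i + 4) % 6]) for i in range(6)]
print(is_single_track_gray(words))   # True
```
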
  • Efficiently computed reduced-parameter input-aided MMSE equalizers for ML detection: a unified approach

    Publication Year: 1996 , Page(s): 903 - 915
    Cited by:  Papers (122)  |  Patents (8)

    A unified approach for computing the optimum settings of a length-N_f input-aided equalizer that minimizes the mean-square error between the equalized channel impulse response and a target impulse response of a given length N_b is presented. This approach offers more insight into the problem, easily accommodates correlation in the input and noise sequences, leads to significant computational savings, and allows us to analyze a variety of constraints on the target impulse response besides the standard unit-tap constraint. In particular, we show that imposing a unit-energy constraint results in a lower mean-square error at a comparable computational complexity. Furthermore, we show that, under the assumed constraint of finite-length filters, the relative delay between the equalizer and the target impulse response plays a crucial role in optimizing performance. We describe a new characterization of the optimum delay and show how to compute it. Finally, we derive reduced-parameter pole-zero models of the equalizer that achieve the high performance of a long all-zero equalizer at a much lower implementation cost.

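One observation from the abstract is easy to reproduce numerically: with finite-length filters, the equalizer's mean-square error depends strongly on the relative (decision) delay. The sketch below scans delays for the simplest target, a pure delay (N_b = 1), with an invented channel, a white unit-variance input, and white noise; the paper's unified framework covers general length-N_b targets, correlated inputs and noise, and the various constraints.

```python
import numpy as np

def mmse_vs_delay(h, Nf, noise_var):
    """MSE of a length-Nf linear MMSE equalizer as a function of decision delay
    (toy setting: white unit-variance input, white noise)."""
    L = len(h)
    H = np.zeros((Nf, Nf + L - 1))           # y_n = H x_n + v_n, with H[i, j] = h[j - i]
    for i in range(Nf):
        H[i, i:i + L] = h
    R = H @ H.T + noise_var * np.eye(Nf)     # covariance of the equalizer input vector
    mse = []
    for delay in range(Nf + L - 1):
        p = H[:, delay]                      # cross-correlation with x(n - delay)
        mse.append(1.0 - p @ np.linalg.solve(R, p))
    return np.array(mse)

mse = mmse_vs_delay(h=np.array([0.5, 1.0, 0.3]), Nf=8, noise_var=0.01)
print(np.round(mse, 4), "-> best delay:", int(mse.argmin()))
```
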
  • Capacity, mutual information, and coding for finite-state Markov channels

    Publication Year: 1996 , Page(s): 868 - 886
    Cited by:  Papers (128)  |  Patents (2)

    The finite-state Markov channel (FSMC) is a discrete time-varying channel whose variation is determined by a finite-state Markov process. These channels have memory due to the Markov channel variation. We obtain the FSMC capacity as a function of the conditional channel state probability. We also show that for i.i.d. channel inputs, this conditional probability converges weakly, and the channel's mutual information is then a closed-form continuous function of the input distribution. We next consider coding for FSMCs. In general, the complexity of maximum-likelihood decoding grows exponentially with the channel memory length. Therefore, in practice, interleaving and memoryless channel codes are used. This technique results in some performance loss relative to the inherent capacity of channels with memory. We propose a maximum-likelihood decision-feedback decoder with complexity that is independent of the channel memory. We calculate the capacity and cutoff rate of our technique, and show that it preserves the capacity of certain FSMCs. We also compare the performance of the decision-feedback decoder with that of interleaving and memoryless channel coding on a fading channel with 4PSK modulation.

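The conditional channel state probability in which the capacity result is expressed is simply a Bayesian belief about the current channel state, updated symbol by symbol from the inputs and outputs. Below is a toy recursion for a two-state channel whose states act as BSCs with different crossover probabilities; all numbers are invented for illustration, and none of the paper's coding results are reproduced here.

```python
import numpy as np

P = np.array([[0.95, 0.05],       # state transition probabilities P[s, s']
              [0.10, 0.90]])
eps = np.array([0.01, 0.20])      # BSC crossover probability in each state

def update_state_belief(pi, x, y):
    """Fold the likelihood of output y given input x into the state belief pi,
    then propagate one step through the Markov chain."""
    like = np.where(x == y, 1.0 - eps, eps)   # P(y | x, state)
    post = pi * like
    post /= post.sum()
    return post @ P                           # predicted belief for the next symbol

rng = np.random.default_rng(0)
pi, state = np.array([0.5, 0.5]), 0
for _ in range(20):
    x = int(rng.integers(2))                  # channel input
    y = x ^ int(rng.random() < eps[state])    # noisy channel output
    pi = update_state_belief(pi, x, y)
    state = rng.choice(2, p=P[state])
print("belief about the next channel state:", np.round(pi, 3))
```
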
  • Necessary conditions for optimum distributed sensor detectors under the Neyman-Pearson criterion

    Publication Year: 1996 , Page(s): 990 - 994
    Cited by:  Papers (26)

    Distributed signal detection schemes that are optimum under the Neyman-Pearson criterion continue to be of interest. The functional forms of these schemes can be difficult to specify, especially for cases with dependent observations from sensor to sensor, where the optimum sensor test statistics are generally not the likelihood ratios of the sensor observations. Equations expressing the forms of the optimum sensor test statistics in terms of the other optimum test statistics and the optimum fusion rule are given; detailed proofs of these results, not available previously, are provided in this correspondence. In some communication, radar, and sonar system problems the amplitude of the received signal may be unknown, but the signal may be known to be weak. Equations expressing the forms of the optimum sensor test statistics for such cases are also given. These expressions have already been shown to be useful for interpreting and finding optimum distributed detection schemes, but detailed proofs of the type given here have not appeared before.

  • An on-line universal lossy data compression algorithm via continuous codebook refinement. II. Optimality for phi-mixing source models

    Publication Year: 1996 , Page(s): 822 - 836
    Cited by:  Papers (17)

    For Part I, see ibid., vol. 42, no. 3, pp. 803-821 (1996). Two versions of the gold-washing data compression algorithm, one with a codebook innovation interval and the other with finitely many codebook innovations, are considered. The version of the gold-washing algorithm with codebook innovation interval k is a variant in which the codebook is innovated once every k+1 source words during the encoding of the entire source. It is demonstrated that when this version of the gold-washing algorithm is applied to encode a stationary, φ-mixing source, the expected distortion performance converges to the distortion-rate function of the source as the codebook length goes to infinity. Furthermore, if the source to be encoded is a Markov source or a finite-state source, then the corresponding sample distortion performance converges almost surely to the distortion-rate function. The version of the gold-washing algorithm with finitely many codebook innovations is a variant in which, after finitely many codebook innovations, the codebook is held fixed and reused to encode the forthcoming source sequence block by block. Similar results are shown for this version of the algorithm. In addition, the convergence speed of the algorithm is discussed.

  • Universal coding for correlated sources with linked encoders

    Publication Year: 1996 , Page(s): 837 - 847
    Cited by:  Papers (11)

    In coding for correlated sources, we extend the Slepian-Wolf (1973) data compression system (called the SW system) to define a new system (called the SWL system) in which the two separate encoders of the SW system are mutually linked. Determining the optimal error exponent for all rates inside the admissible rate region remains an open problem for the SW system. We completely solve this problem for the SWL system, and show that the optimal exponents can be achieved by universal codes. Furthermore, it is shown that the linkage of the two encoders does not extend the admissible rate region and does not even improve the exponent of correct decoding outside this region. The zero-error data transmission problem for the SWL system is also considered. We determine the zero-error rate region, that is, the admissible rate region under the condition that the decoding error probability is strictly zero, and show that this region can be attained by universal codes. Furthermore, we show that the linkage of the encoders enlarges the zero-error rate region. It is interesting to note that the above results for the SWL system correspond in some sense to previous results for the discrete memoryless channel with feedback.

  • Corrections to and comments on “Minimum-bias windows for high-resolution spectral estimates” by Athanasios Papoulis

    Publication Year: 1996
    Cited by:  Papers (1)

    Presents corrections to and comments on the paper by A. Papoulis (IEEE Trans. Inform. Theory, vol. IT-19, no. 1, pp. 9-12, 1973).

  • Design of some new efficient balanced codes

    Publication Year: 1996 , Page(s): 790 - 802
    Cited by:  Papers (39)

    A balanced code with r check bits and k information bits is a binary code of length k+r and cardinality 2^k such that each codeword is balanced; that is, it has [(k+r)/2] 1's and [(k+r)/2] 0's. This paper contains new methods to construct efficient balanced codes. To design a balanced code, an information word with a low number of 1's or 0's is compressed and then balanced using the saved space. On the other hand, an information word having almost the same number of 1's and 0's is encoded using the single maps defined by Knuth's (1986) complementation method. Three different constructions are presented. Balanced codes with r check bits and k information bits with k⩽2^(r+1)−2, k⩽3×2^r−8, and k⩽5×2^r−10r+c(r), c(r)∈{-15, -10, -5, 0, +5}, are given, improving the constructions found in the literature. In some cases, the first two constructions have a parallel coding scheme.

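The complementation step from Knuth's method, on which the abstract's encoding of nearly balanced words relies, fits in a few lines: complementing the first i bits of an even-length word moves the weight in unit steps from w to k − w, so some prefix length always balances the word, and the check bits then only need to encode that index. This is a sketch of the basic step only; the paper's constructions additionally compress words that are far from balanced and reuse the saved space.

```python
def knuth_balance(word):
    """Return a prefix length i and the word with its first i bits complemented
    so that the result has equal numbers of 0's and 1's (word length must be even)."""
    k = len(word)
    assert k % 2 == 0
    for i in range(k + 1):
        candidate = [1 - b for b in word[:i]] + list(word[i:])
        if sum(candidate) == k // 2:          # balanced
            return i, candidate
    raise AssertionError("unreachable: a balancing prefix length always exists")

print(knuth_balance([1, 1, 1, 1, 0, 1]))      # e.g. (2, [0, 0, 1, 1, 0, 1])
```
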
  • Asymptotic statistical analysis of the high-order ambiguity function for parameter estimation of polynomial-phase signals

    Publication Year: 1996 , Page(s): 995 - 1001
    Cited by:  Papers (38)

    The high-order ambiguity function (HAF) is a nonlinear operator designed to detect, estimate, and classify complex signals whose phase is a polynomial function of time. The HAF algorithm, introduced by Peleg and Porat (1991), estimates the phase parameters of polynomial-phase signals measured in noise. The purpose of this correspondence is to analyze the asymptotic accuracy of the HAF algorithm in the case of additive white Gaussian noise. It is shown that the asymptotic variances of the estimates are close to the Cramer-Rao bound (CRB) for high SNR. However, the ratio of the asymptotic variance to the CRB grows polynomially with the noise variance.

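The idea behind the HAF is that a lag product of a polynomial-phase signal lowers the phase order by one, so after enough such steps the leading coefficient appears as the frequency of a nearly pure sinusoid. The sketch below shows only the order-2 step for a noisy chirp, under one common convention for the transform; the signal parameters are invented, and this is not the full Peleg-Porat estimation procedure analyzed in the correspondence.

```python
import numpy as np

rng = np.random.default_rng(0)
N, tau = 1024, 64
a2 = 1e-4                                      # true quadratic phase coefficient
n = np.arange(N)
s = np.exp(1j * (0.3 + 0.1 * n + a2 * n ** 2)) # quadratic-phase (chirp) signal
s = s + 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

m = np.arange(tau, N - tau)
x = s[m + tau] * np.conj(s[m - tau])           # phase order 2 -> 1; frequency 4*a2*tau
X = np.fft.fft(x, 1 << 16)                     # zero-pad for a finer frequency grid
omega = 2 * np.pi * np.fft.fftfreq(1 << 16)    # frequency axis in rad/sample
a2_hat = omega[np.argmax(np.abs(X))] / (4 * tau)
print(a2, a2_hat)                              # the estimate should be close to a2
```
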
  • On the covering radius of R(1, m) in R(3, m)

    Publication Year: 1996 , Page(s): 1035 - 1037
    Cited by:  Papers (6)

    We prove that the covering radius of R(1, 11) in R(3, 11) is 992, and that the covering radius of R(1, 13) in R(3, 13) is 4032, neither exceeding the quadratic bound.

  • Huffman coding with an infinite alphabet

    Publication Year: 1996 , Page(s): 977 - 984
    Cited by:  Papers (15)

    A new type of sufficient condition is provided for a probability distribution on the nonnegative integers to be given an optimal D-ary prefix code by a Huffman-type algorithm. To justify the algorithm, we introduce two new (essentially one) concepts as the definition of the “optimality” of a D-ary prefix code, which are shown to be equivalent to the traditional definition. These new notions of optimality are meaningful even when the Shannon entropy H(P) diverges.

  • On a conjecture of Helleseth regarding pairs of binary m-sequences

    Publication Year: 1996 , Page(s): 988 - 990
    Cited by:  Papers (6)

    Binary m-sequences are maximal-length sequences generated by shift registers of length m; they are employed in navigation, radar, and spread-spectrum communication. It is well known that, given a pair of distinct m-sequences, the crosscorrelation function must take on at least three values. This correspondence addresses a conjecture made by Helleseth in 1976 that if m is a power of 2, then there are no pairs of binary m-sequences with a 3-valued crosscorrelation function. This conjecture is proved under the assumption that the three correlation values are symmetric about -1.

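The setting is straightforward to reproduce numerically: generate two m-sequences from different primitive feedback polynomials and tabulate the values taken by their periodic crosscorrelation; as the correspondence notes, at least three distinct values must appear. The m = 5 example below uses two standard trinomial tap sets chosen by us for illustration.

```python
def m_sequence(taps, m):
    """One period (2**m - 1 bits) of an m-sequence from a Fibonacci LFSR;
    taps are 1-indexed feedback positions (here taken from primitive trinomials)."""
    state, seq = [1] + [0] * (m - 1), []       # any nonzero seed works
    for _ in range(2 ** m - 1):
        seq.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return seq

def crosscorrelation_values(a, b):
    """Set of periodic crosscorrelation values of two binary sequences (as +/-1)."""
    n = len(a)
    x = [1 - 2 * u for u in a]
    y = [1 - 2 * u for u in b]
    return sorted({sum(x[i] * y[(i + s) % n] for i in range(n)) for s in range(n)})

u = m_sequence([5, 2], 5)   # taps from one of the two primitive trinomials of degree 5
v = m_sequence([5, 3], 5)   # taps from the other primitive trinomial of degree 5
print(crosscorrelation_values(u, v))           # at least three distinct values
```
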
  • Estimations of the transfer functions of noncatastrophic convolutional encoders

    Publication Year: 1996 , Page(s): 1014 - 1021
    Cited by:  Papers (1)

    A computational method is proposed that allows us to upper-bound the solutions of a wide class of systems of linear recurrence equations. The method is used to estimate the transfer functions of noncatastrophic convolutional encoders.

  • Fast generalized minimum-distance decoding of algebraic-geometry and Reed-Solomon codes

    Publication Year: 1996 , Page(s): 721 - 737
    Cited by:  Papers (35)  |  Patents (7)

    Generalized minimum-distance (GMD) decoding is a standard soft-decoding method for block codes. We derive an efficient general GMD decoding scheme for linear block codes in the framework of error-correcting pairs. Special attention is paid to Reed-Solomon (RS) codes and one-point algebraic-geometry (AG) codes. For RS codes of length n and minimum Hamming distance d, the GMD decoding complexity turns out to be of order O(nd), where complexity is counted as the number of multiplications in the field of concern. For AG codes the GMD decoding complexity is highly dependent on the curve under consideration. It is shown that we can find all relevant error-erasure-locating functions with complexity O(o_1·nd), where o_1 is the size of the first nongap in the function space associated with the code. A full GMD decoding procedure for a one-point AG code can be performed with complexity O(dn²).

  • Polyspectral analysis of mixed processes and coupled harmonics

    Publication Year: 1996 , Page(s): 943 - 958
    Cited by:  Papers (20)

    Polyspectral analysis of processes with mixed spectra is considered, and scaled polyperiodograms are introduced to clarify issues related to stationarity, ergodicity, and suppression of additive stationary noise in harmonic retrieval problems. Spectral and polyspectral approaches are capable of retrieving (un)coupled harmonics, not only when the harmonics have constant amplitudes, but also when they are observed in nonzero-mean multiplicative noise. Fourier series polyspectra and asymptotic properties of scaled polyperiodograms provide general tools for higher order analysis of time series with mixed spectra. A single-record phase coupling detector is derived to obviate the assumption of independent multiple records required by existing methods. The novelties are illustrated by simulation examples.

  • Coding for channels with cost constraints

    Publication Year: 1996 , Page(s): 854 - 867
    Cited by:  Papers (3)

    We address the problem of finite-state code construction for the costly channel. This channel model is a generalization of the hard-constrained channel, also known as a subshift. Adler et al. (1986) developed the powerful state-splitting algorithm for use in the construction of finite-state codes for hard-constrained channels. We extend the state-splitting algorithm to the costly channel. We construct synchronous (fixed-length to fixed-length) and asynchronous (variable-length to fixed-length) codes. We present several examples of costly channels related to magnetic recording, the telegraph channel, and shaping gain in modulation. We design a number of codes, some of which come very close to achieving capacity.


Aims & Scope

IEEE Transactions on Information Theory publishes papers concerned with the transmission, processing, and utilization of information.


Meet Our Editors

Editor-in-Chief
Frank R. Kschischang

Department of Electrical and Computer Engineering