Information Theory, IEEE Transactions on

Issue 3 • Date May 1994

Displaying Results 1 - 25 of 40
  • A class of constacyclic codes

    Page(s): 951 - 954

    A class of q^m-ary constacyclic codes is defined which has the property that the q-ary image is equivalent to a class of shortened cyclic codes. This description leads to the construction of nonlinear codes. As an example, one can construct nonlinear codes of length N = q^m + 1 over the field GF(q^(m')), m' ⩽ m, with a minimum distance greater than N(1-R)m'/m, where R is the rate. For higher rates these codes are the best known. The codes are easily encoded and decoded.

  • On the principal state method for run-length limited sequences

    Page(s): 934 - 941

    Presents a detailed result on Franaszek's (1968) principal state method for the generation of run-length constrained codes. The authors show that, whenever the constraints k and d satisfy k ⩾ 2d > 0, the set of "principal states" is s_0, s_1, ···, s_(k-1); thus Franaszek's search algorithm is no longer needed. The counting technique used to obtain this result also shows that "state-independent decoding" can be achieved using no more than three codewords per message. Previously, it was not known beforehand that one could use fewer codewords per message than there were principal states. The counting technique also allows one to compare the principal state method with other practical schemes originating from the work of Tang and Bahl (1970), and to use an efficient enumerative coding implementation of the encoder and decoder.
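    A (d, k) run-length constraint of the kind discussed above requires every run of zeros between consecutive ones to have length at least d, and forbids any run of zeros longer than k. A minimal Python sketch of such a checker (the function name and the list-of-bits encoding are illustrative, not from the paper):

```python
def satisfies_dk(bits, d, k):
    """Check a (d, k) run-length constraint: every run of 0s between
    consecutive 1s has length in [d, k], and no run of 0s anywhere
    exceeds k."""
    run = 0
    started = False  # the lower bound d applies only after the first 1
    for b in bits:
        if b == 1:
            if started and not (d <= run <= k):
                return False
            run = 0
            started = True
        else:
            run += 1
            if run > k:  # applies to leading and trailing runs as well
                return False
    return True
```

For example, with d = 2 and k = 3 the sequence 1 0 0 1 0 0 0 1 is admissible, while 1 1 0 violates the lower bound and 1 0 0 0 0 violates the upper bound.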

  • On the bounds on odd correlation of sequences

    Page(s): 954 - 955

    A construction, similar to that of Sidelnikov (1971) and Welch (1974), is proposed to transform bounds on inner products into bounds on odd correlations of a set of sequences with equal energy. Contrary to Sarwate's (1979) result, the present authors show that bounds on odd cross-correlation and autocorrelation can easily be derived from Welch's more general bounds. Generally speaking, since all known lower bounds on periodic correlation have been derived from bounds on inner products, the construction implies that the periodic and odd correlation can share all of these lower bounds.
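    The odd correlation referred to above is the aperiodic-style correlation in which the wrapped-around portion of a cyclic shift enters with a minus sign, in contrast to the periodic (even) correlation. A small illustrative sketch under that standard definition (the index convention is an assumption, not taken from the paper):

```python
def odd_correlation(a, b, tau):
    """Odd cross-correlation of equal-length sequences a, b at shift tau:
    the part of the cyclic shift that wraps around enters with a minus
    sign (for the periodic correlation it would enter with a plus sign)."""
    N = len(a)
    return sum(a[i] * b[(i + tau) % N] * (1 if i + tau < N else -1)
               for i in range(N))

seq = [1, 1, -1, 1]  # illustrative +/-1 sequence
```

At zero shift the odd autocorrelation equals the sequence energy, just as the periodic one does; the two notions differ only at nonzero shifts.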

  • A sharp false alarm upper-bound for a matched filter bank detector

    Page(s): 955 - 960

    A sharp upper bound on the probability of false alarm, P_F, of a matched filter bank detector over the class of input variates that are zero mean and have a specified covariance matrix, R_xx, is derived. This bound is a function of the detector threshold, T, and R_xx. It is shown that the bound is inversely proportional to T². Hence there may be a wide variation of P_F over the class, since the P_F for Gaussian inputs varies as exp(-T²).
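    The paper's sharp bound is not reproduced here, but the classical Chebyshev inequality already illustrates the gap described above: any distribution-free bound over a variance-constrained class can decay no faster than 1/T², while the Gaussian false-alarm probability decays like exp(-T²). A Monte Carlo sketch (the threshold and variance values are illustrative):

```python
import math
import random

random.seed(0)
T = 3.0    # detector threshold (illustrative)
var = 1.0  # output variance fixed by the covariance constraint
n = 200_000

# Chebyshev: P(|Y| >= T) <= var / T^2 for any zero-mean Y with this variance
bound = var / T**2

# Monte Carlo false-alarm rate when the detector output is Gaussian
hits = sum(abs(random.gauss(0.0, math.sqrt(var))) >= T for _ in range(n))
pf_gauss = hits / n
```

For T = 3 the distribution-free bound is about 0.11, while the Gaussian rate is two orders of magnitude smaller, matching the wide variation the abstract describes.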

  • Non-white Gaussian multiple access channels with feedback

    Page(s): 885 - 892

    Although feedback does not increase the capacity of an additive white Gaussian noise channel, it enables prediction of the noise for non-white additive Gaussian noise channels and results in an improvement of capacity, but at most by a factor of 2 (Pinsker, Ebert, Pombra, and Cover). Multiple access white noise channels, on the other hand, do gain capacity from feedback, owing to the cooperation it induces among the senders. Thomas has shown that the total capacity (sum of the rates of all the senders) of an m-user Gaussian white noise multiple access channel with feedback is less than twice the total capacity without feedback. The present authors show that this factor-of-2 bound holds even when cooperation and prediction are combined, by proving that feedback increases the total capacity of an m-user multiple access channel with non-white additive Gaussian noise by at most a factor of 2.

  • Optimal shaping properties of the truncated polydisc

    Page(s): 892 - 903

    Multidimensional constellation shaping with a family of regions called truncated polydiscs is studied. This family achieves maximum shaping gain for a given two-dimensional peak-to-average energy ratio or a given two-dimensional constellation expansion ratio. An efficient algorithm for mapping data words to constellation points is described that requires O(N log N) arithmetic operations and O(N²) lookup table space. Truncated polydisc shaping can easily be incorporated into standard coded modulation schemes.

  • Signal detection games with power constraints

    Page(s): 795 - 807

    Formulates and solves maximin and minimax detection problems for signals with power constraints. These problems arise whenever it is necessary to distinguish between a genuine signal and a spurious one designed by an adversary with the principal goal of deceiving the detector. The spurious (or deceptive) signal is subject to certain constraints, such as limited power, which preclude it from replicating the genuine signal exactly. The detection problem is formulated as a zero-sum game involving two players: the detector designer and the signal designer. The payoff is the probability of error of the detector, which the detector designer tries to minimize and the deceptive signal designer to maximize. For this detection game, saddle point solutions (whenever possible), or otherwise maximin and minimax solutions, are derived under three distinct constraints on the deceptive signal power, involving lower bounds on (i) the signal amplitude, (ii) the time-averaged power, and (iii) the expected power. The cases of independent and identically distributed signals and of correlated signals are considered.

  • Universally ideal secret-sharing schemes

    Page(s): 786 - 794

    Given a set of parties {1, ···, n}, an access structure is a monotone collection of subsets of the parties. For a certain domain of secrets, a secret-sharing scheme for an access structure is a method for a dealer to distribute shares to the parties. These shares enable subsets in the access structure to reconstruct the secret, while subsets not in the access structure get no information about the secret. A secret-sharing scheme is ideal if the domains of the shares are the same as the domain of the secrets. An access structure is universally ideal if there exists an ideal secret-sharing scheme for it over every finite domain of secrets. An obvious necessary condition for an access structure to be universally ideal is to be ideal over the binary and ternary domains of secrets. The authors prove that this condition is also sufficient. They also show that being ideal over just one of the two domains does not suffice for universally ideal access structures. Finally, they give an exact characterization for each of these two conditions.
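    For the simplest access structure, whose only qualified set is all n parties together, an additive scheme over Z_q is ideal for every domain size q, including the binary and ternary domains singled out above. A minimal sketch (the function names are illustrative):

```python
import random

def share(secret, n, q):
    """(n, n)-additive secret sharing over Z_q: all n parties together
    recover the secret, while any proper subset sees only uniformly
    random values.  Shares live in the same domain Z_q as the secret,
    which is what makes the scheme ideal."""
    parts = [random.randrange(q) for _ in range(n - 1)]
    parts.append((secret - sum(parts)) % q)  # force the shares to sum to the secret
    return parts

def reconstruct(parts, q):
    """Recombine all shares (mod q)."""
    return sum(parts) % q
```

Any n-1 of the shares are independent uniform elements of Z_q, so a proper subset learns nothing; only the full set determines the secret.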

  • On the weight hierarchy of geometric Goppa codes

    Page(s): 913 - 920

    The weight hierarchy of a linear code is the set of generalized Hamming weights of the code. In the paper, the authors consider geometric Goppa codes and provide a lower bound on their generalized Hamming weights similar to Goppa's lower bound on their minimum distance. In the particular case of Hermitian codes, exact results on the second and third generalized Hamming weights are given for any m except a few cases, where m is a parameter that governs the dimension of these codes. In many instances, the authors are able to provide considerably more information on their generalized Hamming weights. An upper bound relating the generalized Hamming weights of Hermitian codes to the pole numbers at a special point on the curve is also provided. Similar results are given in the case of codes from some subfields of the Hermitian function fields, which are also maximal. Finally, a nontrivial family of codes is also presented whose weight hierarchy is completely determined.

  • Polynomial estimation of the amplitude of a signal

    Page(s): 960 - 965

    The problem of estimating the amplitude of a signal is addressed using higher-order statistics. The probability distribution of the noise is assumed to be unknown, so that the maximum likelihood estimator cannot be calculated. The estimator is taken as a polynomial of the observation, the coefficients of which are determined so that the estimate is unbiased with minimum variance. This method generalizes the linear approach, and the estimate variance is reduced. The case of linear-quadratic estimation is detailed, and numerical examples are presented.

  • Channel simulation and coding with side information

    Page(s): 634 - 646

    Studies the minimum random bit rate required to simulate a random system (channel), where the simulator operates with a given external input. As measures of simulation accuracy the authors use both the variational distance and the d̄ distance between joint input-output distributions. They find the asymptotic number of random bits per input sample required for accurate simulation, as a function of the distribution of the input process. These results hold for arbitrary channels and input processes, including nonstationary and nonergodic processes, and do not hinge on a specific simulation scheme. A by-product of the analysis is a general formula for the minimal achievable source coding rate with side information.

  • On the finite sample performance of the nearest neighbor classifier

    Page(s): 820 - 837

    The finite sample performance of a nearest neighbor classifier is analyzed for a two-class pattern recognition problem. An exact integral expression is derived for the m-sample risk R_m given that a reference m-sample of labeled points is available to the classifier. The statistical setup assumes that the pattern classes arise in nature with fixed a priori probabilities and that points representing the classes are drawn from Euclidean n-space according to fixed class-conditional probability distributions. The sample is assumed to consist of m independently generated class-labeled points. For a family of smooth class-conditional distributions characterized by asymptotic expansions in general form, it is shown that the m-sample risk R_m has a complete asymptotic series expansion R_m ~ R_∞ + Σ_(k=2)^∞ c_k m^(-k/n) (m → ∞), where R_∞ denotes the nearest neighbor risk in the infinite-sample limit and the coefficients c_k are distribution-dependent constants independent of the sample size m. The analysis thus provides further analytic validation of Bellman's curse of dimensionality. Numerical simulations corroborating the formal results are included, and extensions of the theory are discussed. The analysis also contains a novel application of Laplace's asymptotic method of integration to a multidimensional integral where the integrand attains its maximum on a continuum of points.
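    The expansion above says the finite-sample risk R_m decreases toward the infinite-sample limit R_∞ as the reference sample grows. A quick Monte Carlo sketch with two one-dimensional Gaussian classes (the distributions, priors, and sample sizes are illustrative choices, not the paper's setup):

```python
import random

random.seed(2)

def draw():
    """Draw (x, label) from two equiprobable classes N(0,1) and N(2,1)."""
    y = random.randrange(2)
    return random.gauss(2.0 * y, 1.0), y

def nn_risk(m, trials=5000):
    """Monte Carlo estimate of the m-sample 1-nearest-neighbor risk R_m:
    draw a labeled reference m-sample, classify a fresh test point by its
    nearest reference point, and count errors."""
    errors = 0
    for _ in range(trials):
        ref = [draw() for _ in range(m)]
        x, y = draw()
        yhat = min(ref, key=lambda p: abs(p[0] - x))[1]
        errors += (yhat != y)
    return errors / trials
```

With m = 1 the classifier copies a single random label and errs about half the time; with m = 50 the estimated risk is already close to its infinite-sample value.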

  • Catastrophic continuous phase modulation schemes and their noncatastrophic equivalents

    Page(s): 687 - 695

    Continuous phase modulation (CPM) schemes are bandwidth and energy efficient constant-envelope modulation schemes that can be viewed as a continuous-phase encoder (CPE) followed by a memoryless modulator (MM), where the CPE is of convolutional type. It is observed that CPM schemes can be catastrophic in the sense that pairs of input sequences that differ in an infinite number of positions can be mapped into pairs of signals with finite Euclidean distance. This can happen in spite of the fact that the CPE is never catastrophic when considered as a stand-alone convolutional encoder. The necessary and sufficient condition for a general CPM scheme to be catastrophic is given. Each member of the two major families of CPM schemes, namely the LREC and the LRC, has been classified as a catastrophic or noncatastrophic scheme. For the catastrophic schemes, the probability that a catastrophic event occurs is determined. A canonical precoder which transforms each scheme of both families into an equivalent noncatastrophic scheme is derived. The equivalent noncatastrophic scheme has the same number of states as the original one. Moreover, it has the property that if two input sequences differ in the ith position, the corresponding output signals have nonzero Euclidean distance in the ith interval.

  • A paradigm for class identification problems

    Page(s): 696 - 705

    The following problem arises in many applications involving classification, identification, and inference. There is a set of objects X, and a particular x ∈ X is chosen (unknown to us). Based on information obtained about x in a sequential manner, one wishes to decide whether x belongs to one class of objects A0 or a different class of objects A1. The authors study a general paradigm applicable to a broad range of problems of this type, which they refer to as problems of class identification or discernibility. They consider various types of information sequences, and various success criteria including discernibility in the limit, discernibility with a stopping criterion, uniform discernibility, and discernibility in the Cesaro sense. They consider decision rules both with and without memory. Necessary and sufficient conditions for discernibility are provided for each case in terms of separability conditions on the sets A0 and A1. They then show that for any sets A0 and A1, various types of separability can be achieved by allowing failure on appropriate sets of small measure. Applications to problems in language identification, system identification, and discrete geometry are discussed.

  • On the construction of Cartesian authentication codes over symplectic spaces

    Page(s): 920 - 929

    Various constructions of authentication codes using spaces related to the general linear group have been proposed and analyzed. In the paper the authors describe two new constructions of Cartesian authentication codes using symplectic spaces. This illustrates the feasibility of codes from spaces based on geometries of the other classical groups.

  • The strong simplex conjecture is false

    Page(s): 721 - 731

    The design of M average-energy-constrained signals in additive white Gaussian noise is addressed. The long-standing strong simplex conjecture, which postulates that the regular simplex signal set maximizes the probability of correct detection under an average-energy constraint, is disproven. A signal set is presented that performs better than the regular simplex signal set at low signal-to-noise ratios for all M ⩾ 7. This leads to the result that, for all M ⩾ 7, there is no signal set of M signals which is optimal at all signal-to-noise ratios. Furthermore, the optimal signal set at low signal-to-noise ratios is not an equal energy set for any M ⩾ 7. The regular simplex is shown to be the unique signal set which maximizes the minimum distance between signals. It follows that a signal set which maximizes the minimum distance is not necessarily optimum. However, the regular simplex is shown to be globally optimum in the sense of uniquely maximizing the union bound on error probability at all signal-to-noise ratios.

  • Generating binary sequences for stochastic computing

    Page(s): 716 - 720

    The paper describes techniques for constructing statistically independent binary sequences with prescribed ratios of zeros and ones. The first construction is a general recursive construction, which forms the sequences from a class of "elementary" sequences. The second construction is a special construction which can be used when the ratio of ones to zeros is expressed in binary notation. The second construction is shown to be optimal in terms of the number of input sequences required to construct the desired sequence. The paper concludes with a discussion of how to generate independent "elementary" sequences using simple digital techniques.
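    When the desired ratio of ones has a finite binary expansion p = p_num/2^k, a generic version of the binary-notation idea compares k independent unbiased bits, read as a binary fraction, against that expansion. This sketch is illustrative of the principle, not the paper's exact construction:

```python
import random

random.seed(3)

def bernoulli_from_fair_bits(p_num, k, n):
    """Produce n bits with ones-ratio p = p_num / 2**k using only
    unbiased bits: draw k fair bits, read them as an integer uniform
    on [0, 2^k), and output 1 iff it falls below p_num."""
    out = []
    for _ in range(n):
        r = sum(random.randrange(2) << i for i in range(k))
        out.append(1 if r < p_num else 0)
    return out

bits = bernoulli_from_fair_bits(3, 3, 50_000)  # target ratio 3/8
ratio = sum(bits) / len(bits)
```

Because the k fair bits are independent across output positions, the produced bits are independent with the prescribed ratio of ones.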

  • Trellis-based scalar-vector quantizer for memoryless sources

    Page(s): 860 - 870

    The paper describes a structured vector quantization approach for stationary memoryless sources that combines the scalar-vector quantizer (SVQ) ideas (Laroia and Farvardin, 1993) with trellis coded quantization (Marcellin and Fischer, 1990). The resulting quantizer is called the trellis-based scalar-vector quantizer (TB-SVQ). The SVQ structure allows the TB-SVQ to realize a large boundary gain, while the underlying trellis code enables it to achieve a significant portion of the total granular gain. For large block-lengths and powerful (possibly complex) trellis codes the TB-SVQ can, in principle, achieve the rate-distortion bound. As indicated by the results obtained, even for reasonable block-lengths and relatively simple trellis codes, the TB-SVQ outperforms all other fixed-rate quantizers at reasonable complexity.

  • Nondirect convergence radius and number of iterations of the Hopfield associative memory

    Page(s): 838 - 847

    Considers a Hopfield associative memory consisting of n neurons, designed to store an m-set of n-dimensional ±1 statistically independent uniformly distributed random vectors (fundamental memories), using a connection matrix constructed by the usual Hebbian rule. Previous results have indicated that the maximal value of m, such that almost all m vectors are stable points of the memory, in probability (i.e., with probability approaching one as n approaches infinity), is n/(2 log n) (or n/(4 log n) if all m vectors must be stable simultaneously, in probability). Previous work further analyzed the direct convergence (i.e., convergence in one iteration) error-correcting power of the Hopfield memory. The present authors rigorously analyze the general case of nondirect convergence, and prove that in the m = n/(2 log n) case, independently of the operation mode used (synchronous or asynchronous), almost all memories have an attraction radius of size n/2 around them (in the n/(4 log n) case, all memories have such an attraction radius, in probability). This result, which was conjectured in the past but never proved rigorously, combined with an old converse result that the network cannot store more than n/(2 log n) (respectively, n/(4 log n)) fundamental memories, gives a full picture of the error-correcting power of the Hebbian Hopfield network. The authors also upper bound the number of iterations required to achieve convergence.
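    The Hebbian rule and the stability notion discussed above can be sketched in a few lines. The sizes here are illustrative and keep m far below the n/(2 log n) regime, where stability of all memories is overwhelmingly likely:

```python
import random

random.seed(4)
n, m = 200, 5  # n neurons, m fundamental memories (m << n / (2 log n))

# m random n-dimensional +/-1 fundamental memories
mems = [[random.choice((-1, 1)) for _ in range(n)] for _ in range(m)]

# Hebbian connection matrix W_ij = sum_mu x_i^mu x_j^mu, zero diagonal
W = [[0 if i == j else sum(x[i] * x[j] for x in mems) for j in range(n)]
     for i in range(n)]

def update(state):
    """One synchronous iteration: s_i <- sign(sum_j W_ij s_j)."""
    return [1 if sum(W[i][j] * state[j] for j in range(n)) >= 0 else -1
            for i in range(n)]

# A memory is a stable point if one iteration leaves it unchanged
stable = all(update(x) == x for x in mems)
```

With these sizes the self-coupling signal (of order n) dominates the crosstalk (of order sqrt(mn)), so every stored memory is a fixed point.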

  • The strong law of large numbers for sequential decisions under uncertainty

    Page(s): 609 - 633

    Combines optimization and ergodic theory to characterize the optimum long-run average performance that can be asymptotically attained by nonanticipating sequential decisions. Let {X_t} be a stationary ergodic process, and suppose an action b_t must be selected in a space ℬ with knowledge of the t-past (X_0, ···, X_(t-1)) at the beginning of every period t ⩾ 0. Action b_t will incur a loss l(b_t, X_t) at the end of period t when the random variable X_t is revealed. The author proves under mild integrability conditions that the optimum strategy is to select actions that minimize the conditional expected loss given the currently available information at each step. The minimum long-run average loss per decision can be approached arbitrarily closely by strategies that are finite-order Markov, and under certain continuity conditions, it is equal to the minimum expected loss given the infinite past. If the loss l(b, x) is bounded and continuous and if the space ℬ is compact, then the minimum can be asymptotically attained, even if the distribution of the process {X_t} is unknown a priori and must be learned from experience.

  • Granular quantization noise in a class of delta-sigma modulators

    Page(s): 848 - 859

    The trend toward digital signal processing in communication systems has resulted in a large demand for fast, accurate analog-to-digital (A/D) converters, and advances in VLSI technology have made ΔΣ modulator-based A/D converters attractive solutions. However, rigorous theoretical analyses have only been performed for the simplest ΔΣ modulator architectures. Existing analyses of more complicated ΔΣ modulators usually rely on approximations and computer simulations. In the paper, a rigorous analysis of the granular quantization noise in a general class of ΔΣ modulators is developed. Under the assumption that some input-referred circuit noise or dither is present, the second-order asymptotic statistics of the granular quantization noise sequences are determined and ergodic properties are derived.
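    A toy first-order 1-bit ΔΣ loop with additive dither, much simpler than the general class analyzed in the paper, illustrates the basic mechanism: the quantization error is fed back through the integrator, so the time average of the output bits tracks the constant input:

```python
import random

random.seed(5)

def delta_sigma(x, n, dither_amp=0.05):
    """First-order 1-bit delta-sigma modulator with additive dither at the
    quantizer input.  Constant input x in (-1, 1); returns n output
    samples in {+1, -1}."""
    u, out = 0.0, []
    for _ in range(n):
        u += x - (out[-1] if out else 0.0)  # integrate input minus fed-back output
        d = random.uniform(-dither_amp, dither_amp)
        out.append(1.0 if u + d >= 0 else -1.0)
    return out

y = delta_sigma(0.3, 20_000)
avg = sum(y) / len(y)
```

Since the integrator state stays bounded, the running average of the output converges to the input at rate O(1/n), regardless of the dither realization.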

  • Norm quadratic-residue codes

    Page(s): 946 - 949

    Introduces a new class of binary linear error-correcting codes, based on the concept of the finite upper half-plane. A norm quadratic-residue code C_p has p(p-1) coordinate places labeled by ordered pairs (a, b), where p is a prime of the form 4m+1 and a, b ∈ GF(p), b ≠ 0. Some fundamental properties of C_p are established.

  • Reduced-state sequence detection with convolutional codes

    Page(s): 965 - 972

    Reduced-state sequence detection (RSSD) reduces the state trellis of a channel code by forming the states into classes. States within a class are such that paths into the states lie further than a distance parameter d from each other. An RSSD decoder retains only one survivor per class at each trellis level. The authors apply RSSD to ordinary binary convolutional codes. They first give a class-forming algorithm that finds the greatest reduction. It turns out that no commonly tabulated good code benefits from RSSD. However, RSSD is an effective way to repair weaker codes, such as quick look-in and RCPC codes. Finally, the authors show that RSSD cannot be more efficient than the M-algorithm.

  • Limitations of the capacity of the M-user binary adder channel due to physical considerations

    Page(s): 662 - 673

    The capacity of the M-user binary adder channel, subjected to various restrictions of physical nature, is investigated. The underlying propagation media considered are (i) fiber-optic, with lossless coupling and Poisson statistics, (ii) radio, under Rayleigh fading, and (iii) radio with constant amplitudes and random phases. Whereas the capacity of the unrestricted (ideal) model for the binary adder channel is known to increase without limit with the number of users, it is shown in the present paper that, for each of these cases, the total capacity is upper-bounded by a constant independent of the number of users: in case (i) by 1.7Q_T bits per channel use, where Q_T is the parameter of the Poisson process; in case (ii) by 4.33 bits per channel use; and in case (iii) by 4.27 bits per channel use.

  • Compressing inconsistent data

    Page(s): 706 - 715

    In a frequent practical situation one possesses inconsistent fragmentary data concerning some industrial process or natural phenomenon. It is an interesting and reasonable task to assess what the most concise way to store or transmit them would be. The authors consider the zero-error case of the problem; that is, all the data are to be saved by incorporating them into the most concise, but necessarily alternative, consistent data structures. More precisely, the goal is to find a set of alternatives which requires the minimum total storage. From the mathematical viewpoint, the model is information-theoretic and gives a common framework for many combinatorial problems in the theory of extremal hypergraphs. From the practical viewpoint, the interest of the mathematical theory is to produce new information measures capturing the inconsistency in the data.


Aims & Scope

IEEE Transactions on Information Theory publishes papers concerned with the transmission, processing, and utilization of information.

