
IEEE Transactions on Information Theory

Issue 5 • May 2007


Displaying Results 1 - 25 of 35
  • Table of contents

    Page(s): C1 - C4
  • IEEE Transactions on Information Theory publication information

    Page(s): C2
  • Constrained Information Combining: Theory and Applications for LDPC Coded Systems

    Page(s): 1617 - 1643

    This paper tightens previous information combining bounds on the performance of iterative decoding of binary low-density parity-check (LDPC) codes over binary-input symmetric-output channels by tracking the bit error probability in conjunction with mutual information. Evaluation of the new bounds, as well as of other known bounds, on different LDPC ensembles demonstrates the sensitivity of the finite-dimensional iterative bounds to λ₂, the fraction of edges connected to degree-2 variable nodes.

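    As a point of reference for why it helps to track the bit error probability alongside mutual information: for a binary symmetric channel (BSC) with crossover probability p and uniform input, the mutual information is exactly 1 - h(p), so the two quantities determine one another; for general binary-input symmetric-output channels they do not, which is the slack that finite-dimensional bounds of this kind exploit. A minimal Python check of the BSC relation (purely illustrative, not part of the paper):

from math import log2

def binary_entropy(p):
    # h(p) in bits; endpoints handled separately to avoid log(0)
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

for p in (0.01, 0.05, 0.11, 0.20):
    print(f"crossover p = {p:.2f}  ->  mutual information 1 - h(p) = {1 - binary_entropy(p):.4f} bits")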
  • Iterative Decoding With Replicas

    Page(s): 1644 - 1663

    Replica shuffled versions of iterative decoders for low-density parity-check (LDPC) codes and turbo codes are presented. The proposed schemes can converge faster than standard and plain shuffled approaches. Two methods, density evolution and extrinsic information transfer (EXIT) charts, are used to analyze the performance of the proposed algorithms. Both theoretical analysis and simulations show that the new schedules offer good tradeoffs with respect to performance, complexity, latency, and connectivity.

  • Minimax Universal Decoding With an Erasure Option

    Page(s): 1664 - 1675

    Motivated by applications of rateless coding, decision feedback, and automatic repeat request (ARQ), we study the problem of universal decoding for unknown channels in the presence of an erasure option. Specifically, we harness the competitive minimax methodology developed in earlier studies in order to derive a universal version of Forney's classical erasure/list decoder, which, in the erasure case, optimally trades off between the probability of erasure and the probability of undetected error. The proposed universal erasure decoder guarantees universal achievability of a certain fraction ξ of the optimum error exponents of these probabilities (in a sense to be made precise in the sequel). A single-letter expression for ξ, which depends solely on the coding rate and the Neyman-Pearson threshold (to be defined), is provided. The example of the binary-symmetric channel is studied in full detail, and some conclusions are drawn.

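    To make the erasure trade-off concrete, the toy sketch below implements a Forney-style erasure rule for a known channel: decode to the most likely codeword only when its likelihood sufficiently dominates the combined likelihood of all the other codewords, and erase otherwise. The codebook, the BSC crossover probability, and the threshold are arbitrary illustrative choices; the paper's universal decoder replaces the known-channel likelihoods with a competitive minimax criterion.

from math import prod

CODEBOOK = [(0, 0, 0, 0, 0), (1, 1, 1, 0, 0), (0, 0, 1, 1, 1), (1, 1, 0, 1, 1)]
P = 0.1  # hypothetical BSC crossover probability

def likelihood(codeword, received):
    return prod(P if c != r else 1 - P for c, r in zip(codeword, received))

def decode_with_erasure(received, threshold=5.0):
    likes = [likelihood(c, received) for c in CODEBOOK]
    best = max(range(len(CODEBOOK)), key=lambda m: likes[m])
    others = sum(likes[m] for m in range(len(CODEBOOK)) if m != best)
    return best if likes[best] >= threshold * others else None  # None = erasure

print(decode_with_erasure((0, 0, 0, 0, 0)))  # clear-cut: decodes to codeword 0
print(decode_with_erasure((1, 0, 1, 0, 1)))  # ambiguous: returns None (erasure)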
  • Constructions of Asymptotically Optimal Space–Frequency Codes for MIMO-OFDM Systems

    Page(s): 1676 - 1688

    Constructions of space-frequency (SF) codes for multiple-input multiple-output (MIMO) orthogonal frequency-division multiplexing (OFDM) systems with n_t transmit antennas and Q subcarriers are considered in this paper. Following the pairwise-error-probability analysis, it is known that, in addition to the conventional rank distance criterion, the minimum column distance of (n_t × Q) SF codes serves as another benchmark in code design. SF codes with larger minimum column distance are expected to have better performance. Following this principle, the rate-diversity tradeoff for MIMO-OFDM channels as well as two SF code constructions are presented. The first construction is obtained by right-multiplying the code matrices in a maximal rank-distance (MRD) code by a fixed (Q × Q) nonsingular matrix. Codes obtained from this construction are called linearly transformed MRD (LT-MRD) codes. The minimum column distance of the LT-MRD codes, when averaged over all code ensembles, is shown to meet the Gilbert-Varshamov bound. For the case of constructing (2 × 256) quadrature phase-shift keying (QPSK)-modulated SF codes, it is shown that the LT-MRD codes can provide a much larger minimum column distance, of at least 50, compared to the values of 3, 5, or 6 obtained by other available constructions. The second code construction, termed the cyclotomic construction, is reminiscent of the construction of Reed-Solomon codes except that the code polynomials are now selected according to the cyclotomic cosets of the underlying field. Exact minimum rank distances of the resultant codes are presented. It is shown that this newly constructed code is asymptotically optimal in terms of the rate-diversity tradeoff. Bounds on the minimum column distance of these codes are also given.

  • Golden Space–Time Trellis Coded Modulation

    Page(s): 1689 - 1705

    In this paper, we present a multidimensional trellis coded modulation scheme for a high-rate 2 × 2 multiple-input multiple-output (MIMO) system over slow fading channels. Set partitioning of the Golden code is designed specifically to increase the minimum determinant. The branches of the outer trellis code are labeled with these partitions, and the Viterbi algorithm is applied for trellis decoding. In order to compute the branch metrics, a sphere decoder is used. The general framework for code design and optimization is given. The performance of the proposed scheme is evaluated by simulation, and it is shown to achieve significant gains over the uncoded Golden code.

  • Two-Time-Scale Approximation for Wonham Filters

    Page(s): 1706 - 1715

    This paper is concerned with the approximation of Wonham filters. A focal point is that the underlying hidden Markov chain has a large state space. To reduce computational complexity, a two-time-scale approach is developed. Under time-scale separation, the state space of the underlying Markov chain is divided into a number of groups such that the chain jumps rapidly within each group and switches occasionally from one group to another. Such structure gives rise to a limit Wonham filter that preserves the main features of the filtering process but has a much smaller dimension and is therefore easier to compute. Using the limit filter enables us to develop efficient approximations and useful filters for hidden Markov chains. The main advantage of our approach is the reduction of dimensionality.

  • Performance Analysis and Code Design for Minimum Hamming Distance Fusion in Wireless Sensor Networks

    Page(s): 1716 - 1734

    Distributed classification fusion using error-correcting codes (DCFECC) has recently been proposed for wireless sensor networks operating in a harsh environment. It has been shown to have a considerably better capability against unexpected sensor faults than optimal likelihood fusion. In this paper, we analyze the performance of a DCFECC code with minimum Hamming distance fusion. No assumption is made of identically distributed local observations or of a common marginal distribution for the additive noises on the wireless links. In addition, sensors are allowed to employ their own local classification rules. Upper bounds on the probability of error that are valid for any finite number of sensors are derived based on a large-deviations technique. A necessary and sufficient condition under which the minimum Hamming distance fusion error vanishes as the number of sensors tends to infinity is also established. With the necessary and sufficient condition and the upper error bounds, the relation between the fault-tolerance capability of a DCFECC code and its pairwise Hamming distances is characterized, and can be used together with any code search criterion in finding a code with the desired fault-tolerance capability. Based on the above results, we further propose a code search criterion of much lower complexity than the minimum Hamming distance fusion error criterion adopted earlier by the authors. This makes code construction with acceptable fault-tolerance capability practical for a network with over a hundred sensors. Simulation results show that the code determined by the new, lower-complexity criterion performs almost identically to the best code that minimizes the minimum Hamming distance fusion error. Also simulated and discussed are the performance trends of the codes found with the new, simpler criterion with respect to the network size and the number of hypotheses.

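    The fusion rule itself is simple to state: each hypothesis corresponds to a row of a binary code matrix with one column per sensor, and the fusion center declares the hypothesis whose row is closest, in Hamming distance, to the vector of received sensor decisions. The sketch below uses an arbitrary toy code matrix (not a DCFECC code from the paper) to show how the rule tolerates a few faulty sensor bits.

# Rows: hypotheses; columns: the bit each sensor is supposed to send for that hypothesis.
CODE = [
    [0, 0, 0, 0, 0, 0, 0],
    [1, 1, 1, 0, 0, 0, 1],
    [0, 1, 1, 1, 1, 0, 0],
    [1, 0, 0, 1, 1, 1, 1],
]

def fuse(received_bits):
    # Minimum Hamming distance fusion: pick the closest row.
    distances = [sum(r != b for r, b in zip(row, received_bits)) for row in CODE]
    return distances.index(min(distances))

# Sensors report hypothesis 2 (row [0, 1, 1, 1, 1, 0, 0]) but two bits are flipped:
print(fuse([0, 1, 0, 1, 1, 0, 1]))  # still returns 2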
  • On the Rate of Convergence of Local Averaging Plug-In Classification Rules Under a Margin Condition

    Page(s): 1735 - 1742

    The rates of convergence of plug-in kernel, partitioning, and nearest neighbor classification rules are analyzed. A margin condition, which measures how quickly the a posteriori probabilities cross the decision boundary, smoothness conditions on the a posteriori probabilities, and boundedness of the feature vector are imposed. The rates of convergence of the plug-in classifiers shown in this paper are faster than those previously known.

  • Search for Boolean Functions With Excellent Profiles in the Rotation Symmetric Class

    Page(s): 1743 - 1751

    For the first time, Boolean functions on 9 variables having nonlinearity 241 are discovered; their existence remained an open question in the literature for almost three decades. Such functions are found by a heuristic search in the space of rotation symmetric Boolean functions (RSBFs). This shows that there exist Boolean functions on n (odd) variables having nonlinearity > 2^(n-1) - 2^((n-1)/2) if and only if n > 7. Using a similar search technique, balanced Boolean functions on 9, 10, and 11 variables are obtained having autocorrelation spectra with maximum absolute value < 2^⌈n/2⌉. For an odd number of variables, such functions were previously known only for 15 and 21 variables; there was no evidence of such functions at all for an even number of variables. In certain cases, our functions can be affinely transformed to obtain first-order resiliency or first-order propagation characteristics. Moreover, 10-variable functions having first-order resiliency and nonlinearity 492 are presented, which had been posed as an open question at Crypto 2000. The functions reported in this paper are discovered using a suitably modified steepest-descent-based iterative heuristic search in the RSBF class along with proper affine transformations. It seems elusive to find a construction technique that matches such functions.

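    For readers who want the threshold in the abstract made explicit: 2^(n-1) - 2^((n-1)/2) is the "bent concatenation" nonlinearity value for odd n, so a 9-variable function with nonlinearity 241 beats the corresponding value 240. A two-line check:

# Nonlinearity threshold 2^(n-1) - 2^((n-1)/2) for odd n (exact, since (n-1)/2 is an integer).
for n in (7, 9, 11):
    print(n, 2 ** (n - 1) - 2 ** ((n - 1) // 2))  # 7 -> 56, 9 -> 240, 11 -> 992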
  • A New Attack on the Filter Generator

    Page(s): 1752 - 1758

    The filter generator is an important building block in many stream ciphers. The generator consists of a linear feedback shift register of length n that generates an m-sequence of period 2^n - 1, filtered through a Boolean function of degree d that combines bits from the shift register and creates an output bit z_t at each time t. The previous best attacks, aimed at reconstructing the initial state from an observed keystream, have essentially reduced the problem to solving a nonlinear system of D = Σ_{i=1}^{d} (n choose i) equations in n unknowns using techniques based on linear algebra. This attack needs about D bits of keystream, and the system can be solved with complexity O(D^ω), where ω can be taken to be Strassen's reduction exponent ω = log₂ 7 ≈ 2.807. This paper describes a new algorithm that recovers the initial state of most filter generators after observing O(D) keystream bits, with complexity O((D - n)/2) ≈ O(D), after a pre-computation of complexity O(D (log₂ D)³).

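    To get a feel for the complexity figures quoted above, the short script below evaluates D = Σ_{i=1}^{d} (n choose i) and compares the three costs for one hypothetical parameter choice (n = 80, d = 6 is an illustrative pick, not a case studied in the paper):

from math import comb, log2

n, d = 80, 6  # hypothetical LFSR length and degree of the filtering function
D = sum(comb(n, i) for i in range(1, d + 1))  # number of linearized unknowns/equations

omega = log2(7)  # Strassen's reduction exponent, ~2.807
print(f"D                             = {D:.3e}")
print(f"linearization, ~D^omega       = {D ** omega:.3e}")
print(f"pre-computation, ~D(log2 D)^3 = {D * log2(D) ** 3:.3e}")
print(f"online phase, ~D              = {float(D):.3e}")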
  • N-Sequence RSNS Ambiguity Analysis

    Page(s): 1759 - 1766

    The robust symmetrical number system (RSNS) is a modular system formed using N ≥ 2 integer sequences that ensures that two successive RSNS vectors (paired terms from all N sequences) differ by only one integer. This integer Gray-code property reduces the possibility of encoding errors and makes the RSNS useful in applications such as folding analog-to-digital converters (ADCs), direction-finding antenna architectures, and photonic processors. This paper determines the length of the combined sequences that contains no vector ambiguities. We call this length, or longest run of distinct vectors, the RSNS dynamic range M̂. The position of M̂, that is, its starting point within the sequence, is also derived. Computing M̂ and its position allows the integer Gray-code properties of the RSNS to be used in practical applications. We first extend our two-sequence results to develop a closed-form expression for M̂ for a three-sequence RSNS with moduli of the form 2^r - 1, 2^r, 2^r + 1. We then extend the results to solve for the N-sequence RSNS ambiguity locations in general.

  • Resource Allocation and Quality of Service Evaluation for Wireless Communication Systems Using Fluid Models

    Page(s): 1767 - 1777

    Wireless systems offer a unique mixture of connectivity, flexibility, and freedom. It is therefore not surprising that wireless technology is being embraced with increasing vigor. For real-time applications, user satisfaction is closely linked to quantities such as queue length, packet loss probability, and delay. System performance is therefore related not only to Shannon capacity but also to quality of service (QoS) requirements. This work studies the problem of resource allocation in the context of stringent QoS constraints. The joint impact of spectral bandwidth, power, and code rate is considered. Analytical expressions for the probability of buffer overflow, its associated exponential decay rate, and the effective capacity are obtained. Fundamental performance limits for Markov wireless channel models are identified. It is found that, even with an unlimited power and spectral bandwidth budget, only a finite arrival rate can be supported under a QoS constraint defined in terms of the exponential decay rate.

  • Recursive Constructions of Parallel FIFO and LIFO Queues With Switched Delay Lines

    Page(s): 1778 - 1798

    One of the most popular approaches to the construction of the optical buffers needed for optical packet switching is to use switched delay lines (SDL). Recent advances in the literature have shown that there exist systematic SDL construction theories for various types of optical buffers, including first-in first-out (FIFO) multiplexers, FIFO queues, priority queues, linear compressors, nonovertaking delay lines, and flexible delay lines. As parallel FIFO queues with a shared buffer are widely used in many switch architectures, e.g., input-buffered switches and load-balanced Birkhoff-von Neumann switches, in this paper we propose a new SDL construction for such queues. The key idea of our construction for parallel FIFO queues with a shared buffer is two-level caching, where we construct a dual-port random request queue in the upper level (as a high-switching-speed storage device) and a system of scaled parallel FIFO queues with a shared buffer in the lower level (as a low-switching-speed storage device). By determining appropriate dumping thresholds and retrieving thresholds, we prove that the two-level cache can be operated as a system of parallel FIFO queues with a shared buffer. Moreover, such a two-level construction can be recursively expanded to an n-level construction, where we show that the number of 2 × 2 switches needed to construct a system of N parallel FIFO queues with a shared buffer B is O((N log N) log(B / (N log N))) for N ≫ 1. For the case N = 1, i.e., a single FIFO queue with buffer B, the number of 2 × 2 switches needed is O(log B). This is of the same order as that previously obtained by Chang. We also show that our two-level recursive construction can be extended to construct a system of N parallel last-in first-out (LIFO) queues with a shared buffer using the same number of 2 × 2 switches, i.e., O((N log N) log(B / (N log N))) for N ≫ 1 and O(log B) for N = 1. Finally, we show that a great advantage of our construction is its fault-tolerance capability. The reliability of our construction can be increased by simply adding extra optical memory cells (the basic elements in our construction) in each level, so that our construction still works even when some of the optical memory cells do not function properly.

  • Error Resilient LZ'77 Data Compression: Algorithms, Analysis, and Experiments

    Page(s): 1799 - 1813

    We propose a joint source-channel coding algorithm capable of correcting some errors in the popular Lempel-Ziv '77 (LZ'77) scheme without introducing any measurable degradation in compression performance. This is possible because the LZ'77 encoder does not completely eliminate the redundancy present in the input sequence. One source of redundancy can be observed when an LZ'77 phrase has multiple matches. In this case, LZ'77 can issue a pointer to any of those matches, and a particular choice carries some additional bits of information. We call the scheme with this embedded redundant information the LZS'77 algorithm. We analyze the number of longest matches in such a scheme and prove that it follows the logarithmic series distribution with mean 1/h (plus some fluctuations), where h is the source entropy. Thus, the distribution associated with the number of redundant bits is well concentrated around its mean, a highly desirable property for error correction. These analytic results are proved by a combination of combinatorial, probabilistic, and analytic methods (e.g., Mellin transform, depoissonization, combinatorics on words). In fact, we analyze LZS'77 by studying the multiplicity matching parameter in a suffix tree, which in turn is analyzed via comparison to its independent version, called a trie. Finally, we present an algorithm in which a channel coder (e.g., a Reed-Solomon (RS) coder) succinctly uses the inherent additional redundancy left by the LZS'77 encoder to detect and correct a limited number of errors. We call such a scheme the LZRS'77 algorithm. LZRS'77 is perfectly backward-compatible with LZ'77; that is, a file compressed with our error-resistant LZRS'77 can still be decompressed by a generic LZ'77 decoder.

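    The redundancy being exploited is easy to see on a toy string: when the longest match for the next phrase occurs at several earlier positions, the encoder's freedom to pick any of them can carry roughly log2(number of matches) extra bits. The sketch below merely counts those positions; it is a simplified illustration, not the LZS'77 encoder itself (in particular, it disallows self-overlapping matches for clarity).

from math import log2

def longest_match_positions(text, pos):
    # Length and start positions of the longest match of text[pos:] inside text[:pos].
    best_len, positions = 0, []
    for start in range(pos):
        length = 0
        while (start + length < pos and pos + length < len(text)
               and text[start + length] == text[pos + length]):
            length += 1
        if length > best_len:
            best_len, positions = length, [start]
        elif length == best_len and length > 0:
            positions.append(start)
    return best_len, positions

text = "abcabcabcabx"
length, positions = longest_match_positions(text, 9)  # next phrase starts at index 9
print(length, positions)                               # 2 [0, 3, 6]
print(f"the choice of match can embed ~{log2(len(positions)):.2f} bits")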
  • Carbon Copying Onto Dirty Paper

    Page(s): 1814 - 1827

    A generalization of the problem of writing on dirty paper is considered in which one transmitter sends a common message to multiple receivers. Each receiver experiences on its link an additive interference (in addition to the additive noise), which is known noncausally to the transmitter but not to any of the receivers. Applications range from wireless multiple-antenna multicasting to robust dirty paper coding. We develop results for memoryless channels in the Gaussian and binary special cases. In most cases, we observe that the availability of side information at the transmitter increases capacity relative to systems without such side information, and that the lack of side information at the receivers decreases capacity relative to systems with such side information. For the noiseless binary case, we establish the capacity when there are two receivers. When there are many receivers, we show that the transmitter side information provides a vanishingly small benefit. When the interference is large and independent across the users, we show that time sharing is optimal. For the Gaussian case, we present a coding scheme and establish its optimality in the high signal-to-interference-plus-noise limit when there are two receivers. When the interference power is large and independent across all the receivers, we show that time sharing is again optimal. Connections to the problem of robust dirty paper coding are also discussed.

  • Achievable Error Exponents for the Private Fingerprinting Game

    Page(s): 1827 - 1838

    Fingerprinting systems in the presence of collusive attacks are analyzed as a game between, on the one hand, a fingerprinter and a decoder, and, on the other hand, a coalition of two or more attackers. The fingerprinter distributes to different users different fingerprinted copies of host data (covertext), drawn from a memoryless stationary source and embedded with different fingerprints. The coalition members create a forgery of the data while aiming at erasing the fingerprints so as not to be detected. Their action is modeled by a multiple-access channel (MAC). We analyze the performance of two classes of decoders, associated with different kinds of error events. The decoder of the first class aims at detecting the entire coalition, whereas the second is satisfied with the detection of at least one member of the coalition. Both decoders have access to the original covertext data and observe the forgery in order to identify member(s) of the coalition. Motivated by a worst-case approach, we assume that the coalition of attackers is informed of the hiding strategy taken by the fingerprinter and the decoder, while the fingerprinter and the decoder are uninformed of the attacking scheme. Achievable single-letter expressions for the two kinds of error exponents are obtained. Single-letter lower bounds are also derived for the subclass of constant composition codes. The lower and upper bounds coincide for the error exponent of the first class. Further, for the error of the first kind, an optimal decoder is introduced and the worst case attack channel is characterized.

  • An Extremal Inequality Motivated by Multiterminal Information-Theoretic Problems

    Page(s): 1839 - 1851

    We prove a new extremal inequality, motivated by the vector Gaussian broadcast channel problem and the problem of distributed source coding with a single quadratic distortion constraint. As a corollary, this inequality yields a generalization of the classical entropy-power inequality (EPI). As another corollary, it sheds insight into maximizing the differential entropy of the sum of two dependent random variables.

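    As background for the corollary mentioned above, the classical entropy-power inequality states that N(X + Y) >= N(X) + N(Y) for independent X and Y, where N(X) = exp(2 h(X)) / (2 pi e), with equality for independent Gaussians. The snippet below verifies the Gaussian equality case numerically (a sanity check of the classical EPI only, not of the new inequality):

from math import pi, e, exp, log

def gaussian_entropy(var):
    # Differential entropy (in nats) of a zero-mean Gaussian with variance var.
    return 0.5 * log(2 * pi * e * var)

def entropy_power(h):
    return exp(2 * h) / (2 * pi * e)

vx, vy = 1.0, 2.5  # variances of two independent Gaussians; X + Y has variance vx + vy
lhs = entropy_power(gaussian_entropy(vx + vy))
rhs = entropy_power(gaussian_entropy(vx)) + entropy_power(gaussian_entropy(vy))
print(lhs, rhs)  # both 3.5 (up to floating point): equality in the Gaussian case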
  • On the Performance of a Two-User MIMO Downlink System in Heavy Traffic

    Page(s): 1851 - 1859

    A multiple-input multiple-output (MIMO) downlink system in which data is transmitted to two users over a common wireless channel is considered. The channel is assumed to be fixed for all transmissions over the period of interest, and the ratio of anticipated average arrival rates for the two users, also known as the relative traffic rate, is the system design parameter. A packet-based traffic model is considered in which data for each user is queued at the transmit end. A queueing analog for this system leads to a coupled queueing system for which a simple policy is known to be throughput-optimal under Markovian assumptions. Since an exact expression for the performance is not available, a diffusion approximation is established as a measure of performance in heavy traffic. This diffusion process is a two-dimensional (2-D) semimartingale reflecting Brownian motion (SRBM) living in the positive quadrant of 2-D space.

  • On Context-Tree Prediction of Individual Sequences

    Page(s): 1860 - 1866

    Motivated by the evident success of context-tree based methods in lossless data compression, we explore, in this correspondence, methods of the same spirit in universal prediction of individual sequences. By context-tree prediction, we refer to a family of prediction schemes where, at each time instant t, after having observed all outcomes of the data sequence x_1, ..., x_{t-1}, but not yet x_t, the prediction is based on a "context" (or a state) that consists of the k most recent past outcomes x_{t-k}, ..., x_{t-1}, where the choice of k may depend on the contents of a possibly longer, though limited, portion of the observed past, x_{t-k_max}, ..., x_{t-1}. This is different from the study reported in the paper by Feder, Merhav, and Gutman (1992), where general finite-state predictors as well as "Markov" (finite-memory) predictors of fixed order were studied in the regime of individual sequences. Another important difference between this study and the work of Feder et al. is the asymptotic regime. While in their work the resources of the predictor (i.e., the number of states or the memory size) were kept fixed regardless of the length N of the data sequence, here we investigate situations where the number of contexts, or states, is allowed to grow concurrently with N. We are primarily interested in the following fundamental question: what is the critical growth rate of the number of contexts, below which the performance of the best context-tree predictor is still universally achievable, but above which it is not? We show that this critical growth rate is linear in N. In particular, we propose a universal context-tree algorithm that essentially achieves optimum performance as long as the growth rate is sublinear, and show that, on the other hand, this is impossible in the linear case.

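    A fixed-order special case of the schemes described above is easy to write down: keep a table of counts indexed by the k most recent outcomes and predict by majority vote within the current context. This is only a simplified illustration (the paper's context-tree predictor lets the context length vary and lets the number of contexts grow with the sequence length):

from collections import defaultdict

def fraction_of_errors(x, k=2):
    # Order-k context predictor for a binary sequence: majority vote per context.
    counts = defaultdict(lambda: [0, 0])  # context -> [count of 0s, count of 1s]
    errors = 0
    for t in range(k, len(x)):
        ctx = tuple(x[t - k:t])
        c0, c1 = counts[ctx]
        guess = 1 if c1 > c0 else 0
        errors += (guess != x[t])
        counts[ctx][x[t]] += 1
    return errors / max(1, len(x) - k)

x = [0, 1] * 50 + [1, 1, 0, 1] * 25  # a toy binary sequence
print(f"fraction of prediction errors: {fraction_of_errors(x, k=2):.3f}")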
  • Sequential Prediction of Unbounded Stationary Time Series

    Page(s): 1866 - 1872

    A simple on-line procedure is considered for the prediction of a real-valued sequence. The algorithm is based on a combination of several simple predictors. If the sequence is a realization of an unbounded stationary and ergodic random process, then the average of squared errors converges, almost surely, to that of the optimum, given by the Bayes predictor. An analogous result is offered for the classification of binary processes.

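    A generic way to combine several simple predictors, in the spirit of the procedure above, is to weight each one by an exponential function of its accumulated squared loss and output the weighted average. The sketch below is a standard "mixture of experts" illustration of that idea, not the exact algorithm or parameter choices of the paper.

import math

def combined_prediction(history, experts, eta=0.1):
    # Weight each expert by exp(-eta * its cumulative squared loss on the past),
    # then return the weighted average of the experts' next-step predictions.
    weights = []
    for f in experts:
        loss = sum((f(history[:t]) - history[t]) ** 2 for t in range(1, len(history)))
        weights.append(math.exp(-eta * loss))
    total = sum(weights)
    preds = [f(history) for f in experts]
    return sum(w * p for w, p in zip(weights, preds)) / total

experts = [
    lambda h: h[-1] if h else 0.0,                      # "repeat the last value"
    lambda h: sum(h[-3:]) / len(h[-3:]) if h else 0.0,  # short moving average
]
history = [0.0, 1.0, 0.5, 1.5, 1.0, 2.0, 1.5]
print(combined_prediction(history, experts))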
  • Nonparametric Estimation of Conditional Distributions

    Page(s): 1872 - 1879

    Estimation of conditional distributions is considered. It is assumed that the conditional distribution is either discrete or has a density with respect to the Lebesgue measure. Partitioning estimates of the conditional distribution are constructed, and results concerning consistency and the rate of convergence of the integrated total variation error of the estimates are presented.

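    A concrete instance of a partitioning estimate for a discrete-valued Y: split the range of X into cells and return the empirical distribution of Y inside the cell containing the query point. The synthetic data below (X uniform on [0, 1], Y = 1 with probability X) is an illustrative choice, not an example from the paper.

import random
from collections import Counter

random.seed(0)
data = [(x, 1 if random.random() < x else 0)
        for x in (random.random() for _ in range(5000))]

def partitioning_estimate(x, data, n_cells=10):
    # Empirical conditional distribution of Y, given that X falls in the cell of x.
    cell = min(int(x * n_cells), n_cells - 1)
    ys = [y for (xi, y) in data if min(int(xi * n_cells), n_cells - 1) == cell]
    counts = Counter(ys)
    total = sum(counts.values())
    return {y: c / total for y, c in sorted(counts.items())}

print(partitioning_estimate(0.25, data))  # roughly {0: 0.75, 1: 0.25}
print(partitioning_estimate(0.85, data))  # roughly {0: 0.15, 1: 0.85}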
  • Decorrelation of Wavelet Coefficients for Long-Range Dependent Processes

    Page(s): 1879 - 1883

    We consider a discrete-time stationary long-range dependent process (X_k)_{k ∈ Z} whose spectral density equals φ(|λ|)^(-2d), where φ is a smooth function such that φ(0) = φ''(0) = 0 and φ(λ) ≥ cλ for λ ∈ [0, π]. Then, for any wavelet ψ with N vanishing moments, the lag-k within-level covariance of the wavelet coefficients decays as O(k^(2d-2N-1)) as k → ∞. The result applies to fractionally integrated autoregressive moving average (ARMA) processes as well as to fractional Gaussian noise.

  • New Bounds on the Expected Length of Optimal One-to-One Codes

    Page(s): 1884 - 1895

    In this correspondence, we consider one-to-one encodings for a discrete memoryless source, which are "one-shot" encodings associating a distinct codeword with each source symbol. Such encodings could be employed when only a single source symbol, rather than a sequence of source symbols, needs to be transmitted. For example, such a situation can arise when the last message must be acknowledged before the next message can be transmitted. We consider two slightly different types of one-to-one encodings (depending on whether or not the empty codeword is used) and obtain lower and upper bounds on the expected length of optimal one-to-one codes. We first give an extension of a known tight lower bound on the expected length of optimal one-to-one codes for the case in which the size of the source alphabet is finite and partial information about the source symbol probabilities is available. As expected, our lower bound is no less than the previously known lower bound obtained without side information about the source symbol probabilities. We then consider the case in which the source entropy is available and derive arbitrarily tight lower bounds on the expected length of optimal one-to-one codes. We also derive arbitrarily tight lower bounds for the case in which both the source entropy and the probability of the most likely source symbol are available. Finally, given that the probability of the most likely source symbol is available, we obtain an upper bound on the expected length of optimal one-to-one codes. Our upper bound is tighter than the best upper bound known in the literature.

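    For a concrete sense of what a one-to-one ("one-shot") code looks like: assign the shortest distinct binary strings, optionally including the empty string, to the source symbols in decreasing order of probability; no prefix condition is imposed, so the expected length can even fall below the source entropy. The distribution below is an arbitrary illustrative choice, not one taken from the correspondence.

from math import log2

probs = sorted([0.4, 0.3, 0.15, 0.1, 0.05], reverse=True)

def one_to_one_lengths(m, use_empty_codeword=True):
    # Lengths of the m shortest distinct binary strings ('', '0', '1', '00', ...).
    lengths, L = [], 0 if use_empty_codeword else 1
    while len(lengths) < m:
        lengths.extend([L] * min(2 ** L, m - len(lengths)))
        L += 1
    return lengths

lengths = one_to_one_lengths(len(probs))            # [0, 1, 1, 2, 2]
expected_length = sum(p * l for p, l in zip(probs, lengths))
entropy = -sum(p * log2(p) for p in probs)
print(f"expected length = {expected_length:.3f} bits, entropy = {entropy:.3f} bits")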

Aims & Scope

IEEE Transactions on Information Theory publishes papers concerned with the transmission, processing, and utilization of information.


Meet Our Editors

Editor-in-Chief
Frank R. Kschischang

Department of Electrical and Computer Engineering