Data Compression Conference, 2005. Proceedings. DCC 2005

Date: 29-31 March 2005

Displaying Results 1 - 25 of 96
  • Proceedings. DCC 2005. Data Compression Conference

    Publication Year: 2005
    Freely Available from IEEE
  • [Title page]

    Publication Year: 2005, Page(s):i - iv
    Freely Available from IEEE
  • Table of contents

    Publication Year: 2005, Page(s):v - xiii
    Freely Available from IEEE
  • Near tightness of the El Gamal and Cover region for two descriptions

    Publication Year: 2005, Page(s):3 - 12
    Cited by:  Papers (1)

    We give a single-letter outer bound for the two descriptions problem for iid sources that is universally close to the El Gamal and Cover (EGC) inner bound. The gaps in the quadratic distortion case for the sum and individual rates are upper bounded by 1.5 and 0.5 bits/sample, respectively. These constant bounds are universal with respect to the source being encoded, provided that its variance is finite. ...

  • Distributed source coding in dense sensor networks

    Publication Year: 2005, Page(s):13 - 22
    Cited by:  Papers (14)

    We study the problem of the reconstruction of a Gaussian field defined in [0,1] using N sensors deployed at regular intervals. The goal is to quantify the total data rate required for the reconstruction of the field with a given mean square distortion. We consider a class of two-stage mechanisms which (a) send information to allow the reconstruction of the sensors' samples within sufficient accuracy...

  • Generalization of the rate-distortion function for Wyner-Ziv coding of noisy sources in the quadratic-Gaussian case

    Publication Year: 2005, Page(s):23 - 32
    Cited by:  Papers (7)

    We extend the rate-distortion function for Wyner-Ziv coding of noisy sources with quadratic distortion, in the jointly Gaussian case, to more general statistics. It suffices that the noisy observation Z be the sum of a function of the side information Y and independent Gaussian noise, while the source data X must be the sum of a function of Y, a linear function of Z, and a random variable N such that...

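    As a reference point (a classical fact, not this paper's result): in the noiseless jointly Gaussian setup that this paper generalizes, the Wyner-Ziv rate-distortion function under quadratic distortion is

        R_WZ(D) = max(0, (1/2) log2(σ²_{X|Y} / D)),

    where σ²_{X|Y} is the conditional variance of the source X given the side information Y; in this case there is no rate loss relative to the conditional rate-distortion function.
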
  • Towards practical minimum-entropy universal decoding

    Publication Year: 2005, Page(s):33 - 42
    Cited by:  Papers (9)

    Minimum-entropy decoding is a universal decoding algorithm used in decoding block compression of discrete memoryless sources as well as block transmission of information across discrete memoryless channels. Extensions can also be applied to multiterminal decoding problems, such as the Slepian-Wolf source coding problem. The 'method of types' has been used to show that there exist linear codes for...

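    A minimal Python sketch of the decision rule itself (illustrative only; the paper is about making this rule practical, and enumerating the candidate set, assumed given here, is the hard part; function names are hypothetical):

        from collections import Counter
        from math import log2

        def empirical_entropy(seq):
            """Zeroth-order empirical entropy of seq, in bits/symbol."""
            n = len(seq)
            return -sum(c / n * log2(c / n) for c in Counter(seq).values())

        def min_entropy_decode(candidates):
            """Among source sequences consistent with the received data,
            pick the one with the smallest empirical entropy."""
            return min(candidates, key=empirical_entropy)

        # toy usage: the highly skewed sequence wins over the balanced one
        print(min_entropy_decode(["0001000100", "0110100110"]))  # 0001000100
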
  • On multiterminal source code design

    Publication Year: 2005, Page(s):43 - 52
    Cited by:  Papers (14)

    Multiterminal (MT) source coding refers to separate lossy encoding and joint decoding of multiple correlated sources. This paper presents two practical MT coding schemes under the same general framework of Slepian-Wolf coded quantization (SWCQ) for both direct and indirect quadratic Gaussian MT source coding problems with two encoders. The first asymmetric SWCQ scheme relies on quantization and Wyner-Ziv coding...

  • On the performance of linear Slepian-Wolf codes for correlated stationary memoryless sources

    Publication Year: 2005, Page(s):53 - 62
    Cited by:  Papers (3)

    We derive an upper bound on the average MAP decoding error probability of random linear Slepian-Wolf (SW) codes for arbitrary correlated stationary memoryless sources defined on Galois fields. Using this tool, we analyze the performance of SW codes based on LDPC codes and random permutations, and show that under some conditions, all but a diminishingly small proportion of LDPC encoders and permutations are good...

  • Real, tight frames with maximal robustness to erasures

    Publication Year: 2005, Page(s):63 - 72
    Cited by:  Papers (14)

    Motivated by the use of frames for robust transmission over the Internet, we present a first systematic construction of real tight frames with maximum robustness to erasures. We approach the problem in steps: we first construct maximally robust frames by using polynomial transforms. We then add tightness as an additional property with the help of orthogonal polynomials. Finally, we impose the last...

  • Adaptive block-based image coding with pre-/post-filtering

    Publication Year: 2005, Page(s):73 - 82
    Cited by:  Papers (5)

    This paper presents an adaptive block-based image coding method which combines the advantages of variable block size transforms and an adaptive pre-/post-filtering scheme. Our approach partitions an image into blocks of different sizes that are best suited to the characteristics of the underlying data in the rate-distortion (RD) sense. The adaptive block decomposition mitigates the ringing artifacts...

  • Optimized prediction for geometry compression of triangle meshes

    Publication Year: 2005, Page(s):83 - 92
    Cited by:  Papers (1)

    In this paper we propose a novel geometry compression technique for 3D triangle meshes. We focus on a commonly used technique for predicting vertex positions via a flipping operation using the parallelogram rule. We show that the efficiency of the flipping operation depends on the order in which triangles are traversed and vertices are predicted accordingly. We formulate the problem of optimal...

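    The parallelogram ("flipping") prediction the abstract builds on is simple to state; a minimal numpy sketch of the rule only, not of the paper's traversal-order optimization (function name is illustrative):

        import numpy as np

        def parallelogram_predict(v1, v2, v3):
            """Predict the vertex across the edge (v1, v2) of triangle
            (v1, v2, v3) by completing the parallelogram: v1 + v2 - v3.
            Only the residual actual - predicted is entropy coded."""
            return v1 + v2 - v3

        # toy usage on a planar quad: the prediction is exact
        v1, v2, v3 = (np.array(p, float) for p in ([1, 0, 0], [0, 1, 0], [0, 0, 0]))
        actual = np.array([1.0, 1.0, 0.0])
        print(actual - parallelogram_predict(v1, v2, v3))  # [0. 0. 0.]
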
  • TetStreamer: compressed back-to-front transmission of Delaunay tetrahedra meshes

    Publication Year: 2005, Page(s):93 - 102
    Cited by:  Papers (5)

    We use the abbreviations tet and tri for tetrahedron and triangle. TetStreamer encodes a Delaunay tet mesh in a back-to-front visibility order and streams it from a server to a client (volumetric visualizer). During decompression, the server performs the view-dependent back-to-front sorting of the tets by identifying and deactivating one free tet at a time. A tet is free when all its back faces are...

  • A point-set compression heuristic for fiber-based certificates of authenticity

    Publication Year: 2005, Page(s):103 - 112
    Cited by:  Patents (3)

    A certificate of authenticity (COA) is an inexpensive physical object that has a random unique structure with a high cost of near-exact reproduction. An additional requirement is that the uniqueness of the COA's random structure can be verified using an inexpensive device. Bauder was the first to propose COAs created as a randomized augmentation of a set of fixed-length fibers into a transparent gluing m...

  • Performance comparison of path matching algorithms over compressed control flow traces

    Publication Year: 2005, Page(s):113 - 122
    Cited by:  Patents (1)

    A control flow trace captures the complete sequence of dynamically executed basic blocks and function calls. It is usually stored in compressed form due to its large size. Matching an intraprocedural path in a control flow trace faces path interruption and path context problems and therefore requires extending traditional pattern matching algorithms. In this paper we evaluate different path matching algorithms...

  • Implementation cost of the Huffman-Shannon-Fano code

    Publication Year: 2005, Page(s):123 - 132
    Cited by:  Papers (1)  |  Patents (1)

    An efficient implementation of a Huffman code can be based on the Shannon-Fano construction. An important question is exactly how complex such an implementation is. In the past, authors have considered this question assuming an ordered source symbol alphabet. In the case of compressing blocks of binary symbols, this ordering must be performed explicitly, and it turns out to be the complexity bottleneck...

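    To make "a Huffman code implemented via the Shannon-Fano construction" concrete, a minimal canonical-code sketch in Python (the paper's subject is the implementation cost of exactly the ordering this construction presupposes):

        def canonical_codes(lengths):
            """Assign canonical codewords from Huffman code lengths:
            sort symbols by (length, symbol); codewords are consecutive
            integers, left-shifted whenever the length increases."""
            code, prev_len, codes = 0, 0, {}
            for sym, length in sorted(lengths.items(), key=lambda kv: (kv[1], kv[0])):
                code <<= length - prev_len
                codes[sym] = format(code, f"0{length}b")
                code, prev_len = code + 1, length
            return codes

        print(canonical_codes({"a": 1, "b": 2, "c": 3, "d": 3}))
        # {'a': '0', 'b': '10', 'c': '110', 'd': '111'}
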
  • Binary codes for non-uniform sources

    Publication Year: 2005, Page(s):133 - 142
    Cited by:  Papers (1)

    In many applications of compression, decoding speed is at least as important as compression effectiveness. For example, the large inverted indexes associated with text retrieval mechanisms are best stored compressed, but a working system must also process queries at high speed. Here we present two coding methods that make use of fixed binary representations. They have all of the consequent benefits...

  • Fast decoding of prefix encoded texts

    Publication Year: 2005, Page(s):143 - 152

    New variants of partial decoding tables are presented that can be used to accelerate the decoding of texts compressed by any prefix code, such as Huffman's. They are motivated by a variety of tradeoffs between decompression speed and required auxiliary space, and apply to any shape of the tree, not only the canonical one. Performance is evaluated both analytically and by experiments, showing that...

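    A hedged sketch of the general idea only (the paper's specific variants and their space/speed tradeoffs differ): precompute, for every decoder state (a node of the code tree, i.e. a codeword prefix) and every k-bit input chunk, the symbols emitted and the successor state, so decoding runs k bits at a time instead of bit by bit.

        def build_table(codes, k=4):
            """codes: symbol -> codeword (bit string) of a complete prefix code."""
            decode = {v: s for s, v in codes.items()}
            states = {c[:i] for c in codes.values() for i in range(len(c))}
            table = {}
            for state in states:
                for chunk in range(1 << k):
                    bits, out = state + format(chunk, f"0{k}b"), []
                    while True:
                        for i in range(1, len(bits) + 1):
                            if bits[:i] in decode:
                                out.append(decode[bits[:i]])
                                bits = bits[i:]
                                break
                        else:
                            break  # leftover bits are a codeword prefix
                    table[state, chunk] = (out, bits)
            return table

        def decode_stream(bits, codes, k=4):
            table, state, out = build_table(codes, k), "", []
            bits += "0" * (-len(bits) % k)  # pad; may emit spurious tail symbols
            for i in range(0, len(bits), k):
                syms, state = table[state, int(bits[i:i + k], 2)]
                out += syms
            return out

        codes = {"a": "0", "b": "10", "c": "110", "d": "111"}
        print(decode_stream("010110111", codes))  # ['a','b','c','d', ...padding]
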
  • Efficient string matching algorithms for combinatorial universal denoising

    Publication Year: 2005, Page(s):153 - 162
    Cited by:  Patents (1)

    Inspired by the combinatorial denoising method DUDE, we present efficient algorithms for implementing this idea for arbitrary contexts or for using it within subsequences. We also propose effective, efficient denoising error estimators so that we can find the best denoising of an input sequence over different context lengths. Our methods are simple, drawing from string matching methods and radix sorting...

  • Generalizing the Kraft-McMillan inequality to restricted languages

    Publication Year: 2005, Page(s):163 - 172

    Let ℓ_1, ℓ_2, ..., ℓ_n be a (possibly infinite) sequence of nonnegative integers and Σ some D-ary alphabet. The Kraft inequality states that ℓ_1, ℓ_2, ..., ℓ_n are the lengths of the words in some prefix (free) code over Σ if and only if Σ_{i=1}^{n} D^{-ℓ_i} ≤ 1. ...

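    The classical, unrestricted criterion is easy to check numerically; a small Python sketch of the inequality the paper generalizes:

        def kraft_sum(lengths, D=2):
            """Kraft sum; a D-ary prefix code with these codeword
            lengths exists iff the sum is at most 1."""
            return sum(D ** -l for l in lengths)

        print(kraft_sum([1, 2, 3, 3]))  # 1.0  -> lengths of a complete binary prefix code
        print(kraft_sum([1, 1, 2]))     # 1.25 -> no binary prefix code has these lengths
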
  • Asymptotics of the entropy rate for a hidden Markov process

    Publication Year: 2005, Page(s):173 - 182
    Cited by:  Papers (5)

    We calculate the Shannon entropy rate of a binary hidden Markov process (HMP), of given transition rate and noise ε (emission), as a series expansion in ε. The first two orders are calculated exactly. We then evaluate, for finite histories, simple upper bounds of Cover and Thomas. Surprisingly, we find that for a fixed order k and a history of n steps, the bounds become independent of n for...

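    The finite-history upper bound of Cover and Thomas mentioned above, H(X_n | X_1, ..., X_{n-1}), can be computed directly for small n with the forward algorithm; a Python sketch assuming a binary symmetric Markov chain (flip probability p) observed through a binary symmetric channel with crossover ε (toy parameters, not the paper's):

        from itertools import product
        from math import log2

        def hmp_prob(x, p, eps):
            """P(x) under the HMP, via the forward recursion."""
            emit = lambda s, o: 1 - eps if s == o else eps
            alpha = [0.5 * emit(s, x[0]) for s in (0, 1)]  # stationary start
            for o in x[1:]:
                alpha = [emit(s, o) * sum(a * ((1 - p) if t == s else p)
                                          for t, a in enumerate(alpha))
                         for s in (0, 1)]
            return sum(alpha)

        def block_entropy(n, p, eps):
            return -sum(pr * log2(pr)
                        for x in product((0, 1), repeat=n)
                        if (pr := hmp_prob(x, p, eps)) > 0)

        # H(X_n | X_1^{n-1}) = H_n - H_{n-1}: a decreasing upper bound on the rate
        for n in (2, 4, 6):
            print(n, block_entropy(n, 0.3, 0.01) - block_entropy(n - 1, 0.3, 0.01))
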
  • Efficient alphabet partitioning algorithms for low-complexity entropy coding

    Publication Year: 2005, Page(s):183 - 192
    Cited by:  Papers (2)

    We analyze a technique for reducing the complexity of entropy coding that consists of a priori grouping of the source alphabet symbols and dividing the coding process into two stages: first coding the number of the symbol's group with a more complex method, then coding the symbol's rank inside its group with a less complex method, or simply using its binary representation. Because this...

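    A minimal Python sketch of the two-stage split itself (the group sizes are an assumed toy choice; choosing the partition well is the problem the paper addresses):

        def split_symbol(sym, group_sizes):
            """Map a symbol index to (group number, rank in group).
            The group number goes to the expensive entropy coder; the
            rank is written in plain binary."""
            base = 0
            for g, size in enumerate(group_sizes):
                if sym < base + size:
                    return g, sym - base
                base += size
            raise ValueError("symbol out of range")

        def rank_bits(rank, size):
            # a real coder would spend 0 bits on singleton groups
            width = max(1, (size - 1).bit_length())
            return format(rank, f"0{width}b")

        groups = [1, 2, 4, 8]          # toy exponentially growing groups
        g, r = split_symbol(9, groups)
        print(g, rank_bits(r, groups[g]))  # 3 '010'
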
  • Design of VQ-based hybrid digital-analog joint source-channel codes for image communication

    Publication Year: 2005, Page(s):193 - 202
    Cited by:  Papers (6)

    A joint source-channel coding system for image communication over an additive white Gaussian noise channel is presented. It employs vector-quantization-based hybrid digital-analog modulation techniques with bandwidth compression and expansion for transmitting and reconstructing the wavelet coefficients of an image. The main advantage of the proposed system is that it achieves good performance at the...

  • Hard decision and iterative joint source channel coding using arithmetic codes

    Publication Year: 2005, Page(s):203 - 212
    Cited by:  Papers (2)

    Current proposals for using arithmetic coding in a joint source/channel coding framework require 'soft' information to provide error correction. However, in many applications only the binary arithmetic-coded output is available at the decoder. We propose a hard decision technique that uses only the information in the bitstream to provide error correction. Where soft information is available, this d...

  • Joint source and channel coding using trellis coded CPM: soft decoding

    Publication Year: 2005, Page(s):213 - 222

    Joint source and channel (JSC) coding using combined trellis coded quantization (TCQ) and continuous phase modulation (CPM) is studied. The channel is assumed to be the additive white Gaussian noise (AWGN) channel. This paper studies optimal soft decoding for JSC coding using jointly designed TCQ/CPM. The soft decoder is based on the a posteriori probability (APP) algorithm for trellis coded...
