
Data Compression Conference, 2005. Proceedings. DCC 2005

Date 29-31 March 2005

Displaying Results 1 - 25 of 96
  • Proceedings. DCC 2005. Data Compression Conference

    Publication Year: 2005
  • [Title page]

    Publication Year: 2005 , Page(s): i - iv
  • Table of contents

    Publication Year: 2005 , Page(s): v - xiii
  • Near tightness of the El Gamal and Cover region for two descriptions

    Publication Year: 2005 , Page(s): 3 - 12
    Cited by:  Papers (1)

    We give a single-letter outer bound for the two descriptions problem for iid sources that is universally close to the El Gamal and Cover (EGC) inner bound. The gaps in the quadratic distortion case for the sum and individual rates are upper bounded by 1.5 and 0.5 bits/sample, respectively. These constant bounds are universal with respect to the source being encoded, provided that its variance is finite. They are also universal with respect to the desired distortion levels, under the assumption that, after normalizing the source to have unit variance, D_i ∈ (0,1) for i ∈ {0,1,2} and D_0 ≤ (D_1^-1 + D_2^-1 - 1)^-1.
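
    For context, a standard statement of the EGC inner bound (the textbook form, not quoted from this paper): the rate pair (R_1, R_2) is achievable with distortion triple (D_0, D_1, D_2) if there exist reconstructions X̂_0, X̂_1, X̂_2 jointly distributed with the source X such that E[d_i(X, X̂_i)] ≤ D_i for i ∈ {0,1,2} and

        R_1 \ge I(X; \hat{X}_1), \qquad
        R_2 \ge I(X; \hat{X}_2), \qquad
        R_1 + R_2 \ge I(X; \hat{X}_0, \hat{X}_1, \hat{X}_2) + I(\hat{X}_1; \hat{X}_2).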

  • Distributed source coding in dense sensor networks

    Publication Year: 2005 , Page(s): 13 - 22
    Cited by:  Papers (13)

    We study the problem of the reconstruction of a Gaussian field defined in [0,1] using N sensors deployed at regular intervals. The goal is to quantify the total data rate required for the reconstruction of the field with a given mean square distortion. We consider a class of two-stage mechanisms which (a) send information to allow the reconstruction of the sensors' samples within sufficient accuracy, and then (b) use these reconstructions to estimate the entire field. To implement the first stage, the heavy correlation between the sensor samples suggests the use of distributed coding schemes to reduce the total rate. Our main contribution is to demonstrate the existence of a distributed block coding scheme that achieves, for a given fidelity criterion for the sensors' measurements, a total information rate that is within a constant, independent of N, of the minimum information rate required by an encoder that has access to all the sensor measurements simultaneously. The constant in general depends on the autocorrelation function of the field and the desired distortion criterion for the sensor samples.

  • Generalization of the rate-distortion function for Wyner-Ziv coding of noisy sources in the quadratic-Gaussian case

    Publication Year: 2005 , Page(s): 23 - 32
    Cited by:  Papers (6)

    We extend the rate-distortion function for Wyner-Ziv coding of noisy sources with quadratic distortion, in the jointly Gaussian case, to more general statistics. It suffices that the noisy observation Z be the sum of a function of the side information Y and independent Gaussian noise, while the source data X must be the sum of a function of Y, a linear function of Z, and a random variable N such that the conditional expectation of N given Y and Z is zero, almost surely. Furthermore, the side information Y may be arbitrarily distributed in any alphabet, discrete or continuous. Under these general conditions, we prove that no rate loss is incurred due to the unavailability of the side information at the encoder. In the noiseless Wyner-Ziv case, i.e., when the source data is directly observed, the assumptions are still less restrictive than those recently established in the literature. We confirm, theoretically and experimentally, the consistency of this analysis with some of the main results on high-rate Wyner-Ziv quantization of noisy sources.
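
    For reference, the classical jointly Gaussian, quadratic-distortion baseline that this result generalizes is the no-rate-loss property of Wyner-Ziv coding:

        R_{WZ}(D) = R_{X|Y}(D) = \max\left\{ \tfrac{1}{2}\log_2\frac{\sigma^2_{X|Y}}{D},\; 0 \right\},

    i.e., the rate-distortion function with decoder-only side information Y equals the conditional rate-distortion function in which Y is also available at the encoder.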

  • Towards practical minimum-entropy universal decoding

    Publication Year: 2005 , Page(s): 33 - 42
    Cited by:  Papers (6)

    Minimum-entropy decoding is a universal decoding algorithm used in decoding block compression of discrete memoryless sources as well as block transmission of information across discrete memoryless channels. Extensions can also be applied for multiterminal decoding problems, such as the Slepian-Wolf source coding problem. The 'method of types' has been used to show that there exist linear codes for which minimum-entropy decoders achieve the same error exponent as maximum-likelihood decoders. Since minimum-entropy decoding is NP-hard in general, minimum-entropy decoders have existed primarily in the theory literature. We introduce practical approximation algorithms for minimum-entropy decoding. Our approach, which relies on ideas from linear programming, exploits two key observations. First, the 'method of types' shows that the number of distinct types grows polynomially in n. Second, recent results in the optimization literature have illustrated polytope projection algorithms with complexity that is a function of the number of vertices of the projected polytope. Combining these two ideas, we leverage recent results on linear programming relaxations for error correcting codes to construct polynomial-complexity algorithms for this setting. In the binary case, we explicitly demonstrate linear code constructions that admit provably good performance.
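
    To make the decoding rule concrete, here is a minimal brute-force sketch (assuming a tiny binary linear code given by its parity-check matrix): among all sequences consistent with the received syndrome, pick one whose empirical distribution has minimum entropy. This is the exponential-time rule itself, not the linear-programming approximation proposed in the paper.

        import itertools, math
        from collections import Counter

        def empirical_entropy(seq):
            n = len(seq)
            return -sum(c / n * math.log2(c / n) for c in Counter(seq).values())

        def min_entropy_decode(H, syndrome):
            """Return an x with H x = syndrome (mod 2) whose empirical entropy is minimal."""
            n = len(H[0])
            best, best_h = None, float('inf')
            for x in itertools.product((0, 1), repeat=n):
                if all(sum(h * b for h, b in zip(row, x)) % 2 == s for row, s in zip(H, syndrome)):
                    h = empirical_entropy(x)
                    if h < best_h:
                        best, best_h = x, h
            return best

        # toy example: single parity-check code of length 4
        print(min_entropy_decode([[1, 1, 1, 1]], [0]))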

  • On multiterminal source code design

    Publication Year: 2005 , Page(s): 43 - 52
    Cited by:  Papers (13)

    Multiterminal (MT) source coding refers to separate lossy encoding and joint decoding of multiple correlated sources. This paper presents two practical MT coding schemes under the same general framework of Slepian-Wolf coded quantization (SWCQ) for both direct and indirect quadratic Gaussian MT source coding problems with two encoders. The first, asymmetric SWCQ scheme relies on quantization and Wyner-Ziv coding, and is implemented via source-splitting to achieve any point on the inner sum-rate bound for both direct and indirect MT coding problems. In the second, symmetric SWCQ scheme, the two quantization outputs are compressed using multilevel symmetric Slepian-Wolf coding. This scheme is conceptually simpler and can potentially achieve most of the points on the inner sum-rate bound. Our practical designs employ trellis-coded quantization, an LDPC-based asymmetric Slepian-Wolf code, and arithmetic-code- and turbo-code-based symmetric Slepian-Wolf codes. Simulation results show a gap of only 0.24-0.29 bit per sample from the inner sum-rate bound for both direct and indirect MT coding problems.

  • On the performance of linear Slepian-Wolf codes for correlated stationary memoryless sources

    Publication Year: 2005 , Page(s): 53 - 62
    Cited by:  Papers (3)

    We derive an upper bound on the average MAP decoding error probability of random linear SW codes for arbitrary correlated stationary memoryless sources defined on Galois fields. By using this tool, we analyze the performance of SW codes based on LDPC codes and random permutations, and show that under some conditions, all but a diminishingly small proportion of LDPC encoders and permutations are good enough for the design of practical SW systems when the coding length is very large.
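
    For context, the Slepian-Wolf region that such linear code constructions aim to approach for a pair of correlated sources (X, Y) is

        R_X \ge H(X \mid Y), \qquad R_Y \ge H(Y \mid X), \qquad R_X + R_Y \ge H(X, Y).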

  • Real, tight frames with maximal robustness to erasures

    Publication Year: 2005 , Page(s): 63 - 72
    Cited by:  Papers (6)

    Motivated by the use of frames for robust transmission over the Internet, we present a first systematic construction of real tight frames with maximum robustness to erasures. We approach the problem in steps: we first construct maximally robust frames by using polynomial transforms. We then add tightness as an additional property with the help of orthogonal polynomials. Finally, we impose the last requirement of equal norm and construct, to the best of our knowledge, the first real, tight, equal-norm frames maximally robust to erasures.
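
    As a minimal numerical illustration of the "maximally robust" property alone, using a real Vandermonde (polynomial-transform) frame; this sketch does not address the tightness or equal-norm steps of the paper's construction:

        import numpy as np
        from itertools import combinations

        n, m = 3, 6                             # signal dimension n, frame of m vectors
        t = np.linspace(0.1, 1.0, m)            # distinct real nodes
        F = np.vander(t, N=n, increasing=True)  # m x n polynomial-transform (Vandermonde) frame

        # Maximal robustness to erasures: any n of the m frame vectors span R^n,
        # since every n x n submatrix is a Vandermonde matrix with distinct nodes.
        for rows in combinations(range(m), n):
            assert abs(np.linalg.det(F[list(rows), :])) > 1e-12
        print('every', n, 'of the', m, 'frame vectors are linearly independent')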

  • Adaptive block-based image coding with pre-/post-filtering

    Publication Year: 2005 , Page(s): 73 - 82
    Cited by:  Papers (5)

    This paper presents an adaptive block-based image coding method, which combines the advantages of a variable block size transform and an adaptive pre-/post-filtering scheme. Our approach partitions an image into blocks with different sizes, which are best suited to the characteristics of the underlying data in the rate-distortion (RD) sense. The adaptive block decomposition mitigates ringing artifacts by adopting a small block size transform in nonstationary regions, and improves coding efficiency by using a large block size transform in homogeneous regions. Moreover, pre-/post-filtering is adaptively applied along the block boundaries to improve coding efficiency and minimize blocking artifacts. Simulation results show that the proposed coder achieves competitive objective performance and superior visual reconstruction quality compared with the RD-optimized JPEG2000 and H.264/AVC I-frame coders.
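
    The block-size decision described above is, at its core, a rate-distortion quadtree split test. A generic sketch of that test follows (with a hypothetical cost model; this is not the paper's coder):

        import numpy as np

        def rd_quadtree(block, rate_dist, lam, min_size=4):
            """Keep a (power-of-two sized) block whole if its RD cost beats the sum of its
            four children; rate_dist(block) -> (rate_bits, distortion)."""
            r, d = rate_dist(block)
            cost = d + lam * r
            n = block.shape[0]
            if n <= min_size:
                return cost, [block.shape]
            half = n // 2
            child_cost, parts = 0.0, []
            for i in (0, half):
                for j in (0, half):
                    c, p = rd_quadtree(block[i:i + half, j:j + half], rate_dist, lam, min_size)
                    child_cost += c
                    parts += p
            return (child_cost, parts) if child_cost < cost else (cost, [block.shape])

        # toy cost model: code each block as a flat (DC) approximation
        toy_rate_dist = lambda b: (16.0, float(((b - b.mean()) ** 2).sum()))
        cost, partition = rd_quadtree(np.random.rand(16, 16), toy_rate_dist, lam=0.1)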

  • Optimized prediction for geometry compression of triangle meshes

    Publication Year: 2005 , Page(s): 83 - 92
    Cited by:  Papers (1)

    In this paper we propose a novel geometry compression technique for 3D triangle meshes. We focus on a commonly used technique for predicting vertex positions via a flipping operation using the parallelogram rule. We show that the efficiency of the flipping operation depends on the order in which triangles are traversed and vertices are predicted accordingly. We formulate the problem of optimally traversing triangles and predicting the vertices via flippings as a combinatorial optimization problem of constructing a constrained minimum spanning tree. We give heuristic solutions for this problem and show that we can achieve prediction efficiency within 17.4% on average of the unconstrained minimum spanning tree, which is an unachievable lower bound. We also show significant improvements over previous techniques in the literature that, using a different approach, strive to find good traversals minimizing the prediction errors obtained by a sequence of flipping operations.
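
    The flipping/parallelogram prediction the paper builds on can be sketched in a few lines (this shows only the prediction rule, not the constrained-MST traversal optimization):

        import numpy as np

        def parallelogram_predict(va, vb, vc):
            """Predict the vertex of the triangle obtained by flipping across edge (va, vb):
            the prediction completes the parallelogram formed with the opposite vertex vc."""
            return va + vb - vc

        # the encoder stores only the small residual between the true vertex and its prediction
        va, vb, vc = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]), np.array([0.5, 1.0, 0.0])
        v_true = np.array([0.55, -0.9, 0.02])
        residual = v_true - parallelogram_predict(va, vb, vc)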

  • TetStreamer: compressed back-to-front transmission of Delaunay tetrahedra meshes

    Publication Year: 2005 , Page(s): 93 - 102
    Cited by:  Papers (4)

    We use the abbreviations tet and tri for tetrahedron and triangle. TetStreamer encodes a Delaunay tet mesh in a back-to-front visibility order and streams it from a server to a client (volumetric visualizer). During decompression, the server performs the view-dependent back-to-front sorting of the tets by identifying and deactivating one free tet at a time. A tet is free when all its back faces are on the sheet. The sheet is a tri mesh separating active and inactive tets. It is initialized with the back-facing boundary of the mesh. It is compressed using EdgeBreaker and transmitted first. It is maintained by both the server and the client and advanced towards the viewer passing one free tet at a time. The client receives a compressed bit stream indicating where to attach free tets to the sheet. It renders each free tet and updates the sheet by either flipping a concave edge, removing a concave valence-3 vertex, or inserting a new vertex to split a tri. TetStreamer compresses the connectivity of the whole tet mesh to an average of about 1.7 bits per tet. The footprint (in-core memory required by the client) needs only to hold the evolving sheet, which is a small fraction of the storage that would be required by the entire tet mesh. Hence, TetStreamer permits us to receive, decompress, and visualize or process very large meshes on clients with a small in-core memory. Furthermore, it permits us to use volumetric visualization techniques, which require that the mesh be processed in view-dependent back-to-front order, at no extra memory, performance, or transmission cost.

  • A point-set compression heuristic for fiber-based certificates of authenticity

    Publication Year: 2005 , Page(s): 103 - 112

    A certificate of authenticity (COA) is an inexpensive physical object that has a random unique structure with a high cost of near-exact reproduction. An additional requirement is that the uniqueness of a COA's random structure can be verified using an inexpensive device. Bauder was the first to propose a COA created as a randomized augmentation of a set of fixed-length fibers into a transparent gluing material that randomly fixes, once and for all, the positions of the fibers within. Recently, Kirovski (2004) showed that a linear improvement in the compression ratio of the point-set compression algorithm used to store the fibers' locations yields an exponential increase in the cost of forging a fiber-based COA instance. To address this issue, in this paper we introduce a novel, generalized heuristic that compresses M points in an N-dimensional grid with computational complexity proportional to O(M^2). We compare its performance with an expected lower bound. The heuristic can be used for numerous other applications such as storage of biometric patterns.

  • Performance comparison of path matching algorithms over compressed control flow traces

    Publication Year: 2005 , Page(s): 113 - 122
    Cited by:  Patents (1)

    A control flow trace captures the complete sequence of dynamically executed basic blocks and function calls. It is usually stored in compressed form due to its large size. Matching an intraprocedural path in a control flow trace faces path interruption and path context problems and therefore requires the extension of traditional pattern matching algorithms. In this paper we evaluate different path matching schemes, including those matching in the compressed data directly and those matching after decompression. We design simple indices for the compressed data and show that they can greatly improve the performance. Our experimental results show that these schemes are useful and can be adapted to environments with different hardware settings and path matching requests.

  • Implementation cost of the Huffman-Shannon-Fano code

    Publication Year: 2005 , Page(s): 123 - 132
    Cited by:  Papers (1)  |  Patents (1)

    An efficient implementation of a Huffman code can be based on the Shannon-Fano construction. An important question is exactly how complex such an implementation is. In the past, authors have considered this question assuming an ordered source symbol alphabet. In the case of the compression of blocks of binary symbols, this ordering must be performed explicitly, and it turns out to be the complexity bottleneck.
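
    A minimal sketch of the canonical codeword assignment that such Huffman-Shannon-Fano implementations rely on: symbols are ordered by code length and given consecutive codewords, and that explicit ordering (the sort below) is the step the abstract identifies as the bottleneck for blocks of binary symbols.

        def canonical_codes(lengths):
            """Assign canonical (Shannon-Fano ordered) codewords from Huffman code lengths.
            lengths: dict symbol -> codeword length in bits."""
            code, prev_len, out = 0, 0, {}
            for sym, ln in sorted(lengths.items(), key=lambda kv: (kv[1], kv[0])):
                code <<= ln - prev_len
                out[sym] = format(code, '0{}b'.format(ln))
                code += 1
                prev_len = ln
            return out

        print(canonical_codes({'a': 1, 'b': 2, 'c': 3, 'd': 3}))
        # {'a': '0', 'b': '10', 'c': '110', 'd': '111'}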

  • Binary codes for non-uniform sources

    Publication Year: 2005 , Page(s): 133 - 142
    Cited by:  Papers (1)

    In many applications of compression, decoding speed is at least as important as compression effectiveness. For example, the large inverted indexes associated with text retrieval mechanisms are best stored compressed, but a working system must also process queries at high speed. Here we present two coding methods that make use of fixed binary representations. They have all of the consequent benefits in terms of decoding performance, but are also sensitive to localized variations in the source data, and in practice give excellent compression. The methods are validated by applying them to various test data, including the index of an 18 GB document collection.
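
    The abstract does not spell out the two methods; purely as background on the flavour of fixed-binary-representation codes used in index compression, here is a sketch of a minimal (truncated) binary code. It is offered as an illustration only, not as the paper's methods.

        def minimal_binary(x, n):
            """Minimal (truncated) binary code for x in [0, n): floor(log2 n) bits for the
            first few values and one extra bit for the rest, decodable with fixed-width reads."""
            k = n.bit_length() - 1          # floor(log2 n)
            short = (1 << (k + 1)) - n      # number of values that get only k bits
            if x < short:
                return format(x, '0{}b'.format(k)) if k else ''
            return format(x + short, '0{}b'.format(k + 1))

        print([minimal_binary(x, 5) for x in range(5)])  # ['00', '01', '10', '110', '111']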

  • Fast decoding of prefix encoded texts

    Publication Year: 2005 , Page(s): 143 - 152

    New variants of partial decoding tables are presented that can be used to accelerate the decoding of texts compressed by any prefix code, such as Huffman's. They are motivated by a variety of tradeoffs between decompression speed and required auxiliary space, and apply to any shape of the tree, not only the canonical one. Performance is evaluated both analytically and by experiments, showing that the necessary tables can be reduced drastically, with hardly any loss in performance.
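
    The general idea of a partial decoding table can be sketched as follows (a simplified version assuming a complete prefix code such as a Huffman code, not the paper's specific variants): for every decoder state and every k-bit chunk of input, precompute the symbols emitted and the next state, so the decoder processes k bits per table lookup instead of one bit at a time.

        def build_partial_tables(codes, k=4):
            """codes: dict symbol -> codeword bit string of a complete prefix code.
            For every state (a proper prefix of some codeword) and every k-bit chunk,
            precompute the symbols decoded and the resulting state."""
            inv = {c: s for s, c in codes.items()}
            states = {''} | {c[:i] for c in codes.values() for i in range(1, len(c))}
            table = {}
            for state in states:
                for chunk in range(1 << k):
                    bits = state + format(chunk, '0{}b'.format(k))
                    out, buf = [], ''
                    for b in bits:
                        buf += b
                        if buf in inv:
                            out.append(inv[buf])
                            buf = ''
                    table[state, chunk] = (out, buf)
            return table

        def decode(bits, codes, k=4):
            """Decode k bits at a time via table lookups (len(bits) assumed a multiple of k)."""
            table = build_partial_tables(codes, k)
            state, out = '', []
            for i in range(0, len(bits), k):
                syms, state = table[state, int(bits[i:i + k], 2)]
                out.extend(syms)
            return out

        codes = {'a': '0', 'b': '10', 'c': '110', 'd': '111'}
        print(decode('0101100111110110', codes))  # ['a', 'b', 'c', 'a', 'd', 'c', 'c']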

  • Efficient string matching algorithms for combinatorial universal denoising

    Publication Year: 2005 , Page(s): 153 - 162
    Cited by:  Patents (1)

    Inspired by the combinatorial denoising method DUDE, we present efficient algorithms for implementing this idea for arbitrary contexts or for using it within subsequences. We also propose effective, efficient denoising error estimators so we can find the best denoising of an input sequence over different context lengths. Our methods are simple, drawing from string matching methods and radix sorting. We also present experimental results of our proposed algorithms.

  • Generalizing the Kraft-McMillan inequality to restricted languages

    Publication Year: 2005 , Page(s): 163 - 172

    Let ℓ_1, ℓ_2, ..., ℓ_n be a (possibly infinite) sequence of nonnegative integers and Σ some D-ary alphabet. The Kraft inequality states that ℓ_1, ℓ_2, ..., ℓ_n are the lengths of the words in some prefix (free) code over Σ if and only if ∑_{i=1}^{n} D^(-ℓ_i) ≤ 1. Furthermore, the code is exhaustive if and only if equality holds. The McMillan inequality states that if ℓ_1, ℓ_2, ..., ℓ_n are the lengths of the words in some uniquely decipherable code, then the same condition holds. In this paper we examine how the Kraft-McMillan inequality conditions for the existence of a prefix or uniquely decipherable code change when the code is not only required to be prefix but all of the codewords are restricted to belong to a given specific language L. For example, L might be all words that end in a particular pattern or, if Σ is binary, might be all words in which the number of zeros equals the number of ones.
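
    As a worked binary example of the condition: the lengths (1, 2, 3, 3) over a binary alphabet (D = 2) satisfy

        \sum_{i=1}^{4} 2^{-\ell_i} = \tfrac{1}{2} + \tfrac{1}{4} + \tfrac{1}{8} + \tfrac{1}{8} = 1,

    so an exhaustive prefix code with these lengths exists, for instance {0, 10, 110, 111}. The paper asks how this condition changes when every codeword must additionally belong to a restricted language L.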

  • Asymptotics of the entropy rate for a hidden Markov process

    Publication Year: 2005 , Page(s): 173 - 182
    Cited by:  Papers (3)

    We calculate the Shannon entropy rate of a binary hidden Markov process (HMP), with given transition rate and emission noise ε, as a series expansion in ε. The first two orders are calculated exactly. We then evaluate, for finite histories, the simple upper bounds of Cover and Thomas. Surprisingly, we find that for a fixed order k and a history of n steps, the bounds become independent of n for large enough n. This observation is the basis of a conjecture that the upper bound obtained for n ≥ (k+3)/2 gives the exact entropy rate for any desired order k of ε.
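
    For context, the finite-history bounds of Cover and Thomas referred to here sandwich the entropy rate H of a hidden Markov process X with underlying Markov state S (standard textbook form; both sides converge to H as n grows):

        H(X_n \mid X_{n-1}, \ldots, X_1, S_1) \;\le\; H \;\le\; H(X_n \mid X_{n-1}, \ldots, X_1).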

  • Efficient alphabet partitioning algorithms for low-complexity entropy coding

    Publication Year: 2005 , Page(s): 183 - 192
    Cited by:  Papers (1)

    We analyze the technique for reducing the complexity of entropy coding that consists of grouping the source alphabet symbols a priori and dividing the coding process into two stages: first coding the index of the symbol's group with a more complex method, and then coding the symbol's rank inside its group with a less complex method, or simply using its binary representation. Because this method has proved to be quite effective, it is widely used in practice and is an important part of standards like MPEG and JPEG. However, a theory to fully exploit its effectiveness had not been sufficiently developed. In this work, we study methods for optimizing the alphabet decomposition, and prove that a necessary optimality condition eliminates most of the possible solutions and guarantees that dynamic programming solutions are optimal. In addition, we show that the data used for optimization have useful mathematical properties, which greatly reduce the complexity of finding optimal partitions. Finally, we extend the analysis, and propose efficient algorithms, for finding min-max optimal partitions for multiple data sources. Numerical results show the difference in redundancy for single and multiple sources.
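
    The two-stage scheme being analyzed can be sketched as follows (exponentially growing groups in the spirit of JPEG magnitude categories, chosen here only for illustration; the paper is about how to optimize the grouping itself):

        def two_stage_code(x):
            """Split a nonnegative integer into a group index (to be entropy coded
            by the 'more complex method') and a fixed-length binary rank inside the group."""
            g = x.bit_length()                      # group g holds values in [2^(g-1), 2^g)
            rank = x - (1 << (g - 1)) if g else 0
            extra_bits = max(g - 1, 0)
            return g, format(rank, '0{}b'.format(extra_bits)) if extra_bits else ''

        print([two_stage_code(x) for x in (1, 2, 5, 13)])
        # [(1, ''), (2, '0'), (3, '01'), (4, '101')]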

  • Design of VQ-based hybrid digital-analog joint source-channel codes for image communication

    Publication Year: 2005 , Page(s): 193 - 202
    Cited by:  Papers (6)

    A joint source-channel coding system for image communication over an additive white Gaussian noise channel is presented. It employs vector quantization based hybrid digital-analog modulation techniques with bandwidth compression and expansion for transmitting and reconstructing the wavelet coefficients of an image. The main advantage of the proposed system is that it achieves good performance at the design channel signal-to-noise ratio (CSNR), while still maintaining a "graceful improvement" characteristic at higher CSNR. Comparisons are made with two purely digital systems and two purely analog systems. Simulation shows that the proposed system is superior to the other investigated systems for a wide range of CSNR.

  • Hard decision and iterative joint source channel coding using arithmetic codes

    Publication Year: 2005 , Page(s): 203 - 212
    Cited by:  Papers (2)

    Current proposals for using arithmetic coding in a joint source/channel coding framework require 'soft' information to provide error correction. However, in many applications only the binary arithmetic coded output is available at the decoder. We propose a hard decision technique that uses only the information in the bitstream to provide error correction. Where soft information is available, this decoder can also be used to substantially enhance the performance of any soft decision decoder by using the two decoders in an iterative fashion.

  • Joint source and channel coding using trellis coded CPM: soft decoding

    Publication Year: 2005 , Page(s): 213 - 222

    Joint source and channel (JSC) coding using combined trellis coded quantization (TCQ) and continuous phase modulation (CPM) is studied. The channel is assumed to be the additive white Gaussian noise (AWGN) channel. Optimal soft decoding for JSC coding using jointly designed TCQ/CPM is studied in this paper. The soft decoder is based on the a posteriori probability (APP) algorithm for trellis coded CPM. It is shown that the systems with soft decoding outperform the systems with hard decoding especially when the systems operate at low to medium signal-to-noise ratio (SNR). Furthermore, a TCQ design algorithm for the noisy channel is developed. It has been demonstrated that the combined TCQ/CPM systems are both power and bandwidth efficient compared with the combined TCQ/TCM/8PSK systems. The novelty of this work is the use of a soft decoder and the APP algorithm for combined TCQ/CPM systems.
