Data Compression Conference, 2007 (DCC '07)

Date: 27-29 March 2007

Displaying results 1-25 of 80
  • Data Compression Conference - Cover

    Page(s): c1
  • Data Compression Conference - Title

    Page(s): i - iii
  • Data Compression Conference - Copyright

    Page(s): iv
  • Data Compression Conference - Table of contents

    Page(s): v - xi
  • Program Committee

    Page(s): xii
  • The Capocelli Prize

    Page(s): xiii
  • A Stochastic Model for Video and its Information Rates

    Page(s): 3 - 12

    We propose a stochastic model for video and compute its information rates. The model has two sources of information representing ensembles of camera motion and visual scene data (i.e., "realities"). The sources of information are combined, generating a vector process that we study in detail. Both lossless and lossy information rates are derived. The model is further extended to account for realities that change over time. We derive bounds on the lossless and lossy information rates for this dynamic reality model, stating conditions under which the bounds are tight. Experiments with synthetic sources suggest that in the presence of scene motion, simple hybrid coding using motion estimation with DPCM can be suboptimal relative to the true rate-distortion bound.

  • Half-Pel Accurate Motion-Compensated Orthogonal Video Transforms

    Page(s): 13 - 22

    Motion-compensated lifted wavelets have received much interest for video compression. While they are biorthogonal, they may substantially deviate from orthonormality due to motion compensation, even if based on an orthogonal or near-orthogonal wavelet. A temporal transform for video sequences that maintains orthonormality while permitting flexible motion compensation would be very desirable. We have recently introduced such a transform for integer-pel accurate motion compensation from one previous frame. In this paper, we extend this idea to half-pel accurate motion compensation. Orthonormality is maintained for arbitrary half-pel motion compensation by cascading a sequence of incremental orthogonal transforms. The half-pel intensity values are obtained by averaging neighboring integer-pel positions. Depending on the number of averaged integer-pel values, we use different types of incremental transforms. The cascade of incremental transforms allows us to choose in each step the optimal type of incremental transform and, hence, the optimal half-pel position. Half-pel motion-compensated blocks of arbitrary shape and size can be used as the granularity of the cascade can be as small as one pixel. The new half-pel accurate motion-compensated orthogonal video transform compares favorably with the integer-pel accurate orthogonal transform.

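    As a minimal illustration of the interpolation step described in the abstract (only the neighbor averaging, not the authors' cascade of incremental orthogonal transforms), half-pel intensities can be formed by averaging the 1, 2, or 4 nearest integer-pel samples; all names below are hypothetical:

        import numpy as np

        def half_pel_value(frame, y, x):
            # y and x are in half-pel units: even = integer-pel, odd = half-pel
            y0, x0 = y // 2, x // 2
            ys = [y0] if y % 2 == 0 else [y0, y0 + 1]
            xs = [x0] if x % 2 == 0 else [x0, x0 + 1]
            # average the 1, 2, or 4 neighboring integer-pel values
            vals = [frame[yy, xx] for yy in ys for xx in xs]
            return sum(vals) / len(vals)

        frame = np.arange(16.0).reshape(4, 4)
        print(half_pel_value(frame, 1, 1))  # 4-neighbor average at a diagonal half-pel
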
  • Spatial Sparsity Induced Temporal Prediction for Hybrid Video Compression

    Page(s): 23 - 32

    In this paper we propose a new motion-compensated prediction technique that enables successful predictive encoding during fades, blended scenes, temporally decorrelated noise, and many other temporal evolutions that force the predictors used in traditional hybrid video coders to fail. We model reference frame blocks to be used in motion-compensated prediction as consisting of two superimposed parts: one part that is relevant for prediction and another part that is not. By performing prediction in a domain where the video frames are spatially sparse, our work allows the automatic isolation of the prediction-relevant parts. These are then used to enable better prediction than would be possible otherwise. Our sparsity-induced prediction algorithm (SIP) generates successful predictors by exploiting the non-convex structure of the sets in which natural images and video frames lie. Correctly determining this non-convexity through sparse representations allows better performance in hybrid video codecs equipped with the proposed technique.

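    A minimal sketch of prediction in a spatially sparse domain, using DCT thresholding as a stand-in for the paper's SIP machinery: the reference block's largest-magnitude coefficients are kept as the "prediction-relevant" part and the remainder is discarded. The block size and number of kept coefficients are arbitrary choices, not the paper's.

        import numpy as np

        def dct_matrix(n):
            # orthonormal DCT-II basis, rows = frequencies
            k, i = np.arange(n)[:, None], np.arange(n)[None, :]
            C = np.cos(np.pi * k * (2 * i + 1) / (2 * n))
            C[0] *= np.sqrt(1.0 / n)
            C[1:] *= np.sqrt(2.0 / n)
            return C

        def relevant_part(block, keep=8):
            # keep only the `keep` largest-magnitude 2-D DCT coefficients
            C = dct_matrix(block.shape[0])
            coef = C @ block @ C.T
            cutoff = np.sort(np.abs(coef), axis=None)[-keep]
            coef[np.abs(coef) < cutoff] = 0.0   # drop the prediction-irrelevant part
            return C.T @ coef @ C               # back to the pixel domain

        ref = np.random.default_rng(0).normal(size=(8, 8))
        predictor = relevant_part(ref)          # candidate motion-compensated predictor
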
  • Normalized maximum likelihood model of order-1 for the compression of DNA sequences

    Page(s): 33 - 42

    We present the normalized maximum likelihood (NML) model for classes of models with memory described by first-order dependencies. The model is used for efficiently locating and encoding the best regressor present in a dictionary. By combining the order-1 NML model with the order-0 NML model, the resulting algorithm achieves a consistent improvement over the earlier order-0 NML algorithm, and it is demonstrated to have superior performance on practical compression of the human genome.

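    For intuition, the order-0 NML code length is easy to compute for a binary alphabet (the paper works with the four-letter DNA alphabet and order-1 dependencies, so this is only a toy): it is the negative log of the maximized likelihood plus the log of the NML normalizer.

        import math

        def nml_bits(k: int, n: int) -> float:
            # order-0 NML code length for a binary string with k ones out of n
            def ml(j):  # maximized likelihood (j/n)^j ((n-j)/n)^(n-j)
                return (j / n) ** j * ((n - j) / n) ** (n - j) if 0 < j < n else 1.0
            C = sum(math.comb(n, j) * ml(j) for j in range(n + 1))  # normalizer
            return -math.log2(ml(k)) + math.log2(C)

        print(nml_bits(3, 16))  # code length in bits for a 16-symbol sequence
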
  • A Simple Statistical Algorithm for Biological Sequence Compression

    Page(s): 43 - 52

    This paper introduces a novel algorithm for biological sequence compression that makes use of both statistical properties and repetition within sequences. A panel of experts is maintained to estimate the probability distribution of the next symbol in the sequence to be encoded. Expert probabilities are combined to obtain the final distribution, and each symbol is then encoded by arithmetic coding. The resulting information sequence provides insight for further study of the biological sequence. Experiments show that our algorithm outperforms existing compressors on typical DNA and protein sequence datasets while maintaining a practical running time.

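    A minimal sketch of the combination step: each expert proposes a distribution over the next symbol, the distributions are blended by reliability weights, and arithmetic coding then spends about -log2 p bits on the symbol that actually occurs. The two experts and their weights below are invented for illustration, not the paper's panel.

        import math

        def mix(dists, weights):
            # weighted average of per-expert next-symbol distributions
            total = sum(weights)
            return {s: sum(w * d[s] for w, d in zip(weights, dists)) / total
                    for s in dists[0]}

        order0 = {'A': 0.30, 'C': 0.20, 'G': 0.20, 'T': 0.30}  # global statistics expert
        repeat = {'A': 0.85, 'C': 0.05, 'G': 0.05, 'T': 0.05}  # repeat-copying expert
        p = mix([order0, repeat], weights=[1.0, 3.0])
        print(f"p(A) = {p['A']:.3f}, cost = {-math.log2(p['A']):.2f} bits")
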
  • Structure induction by lossless graph compression

    Page(s): 53 - 62

    This work is motivated by the need to automate the discovery of structure in vast and ever-growing collections of relational data commonly represented as graphs, such as genomic networks. A novel algorithm for structure induction by lossless graph compression, dubbed Graphitour, is presented and illustrated on a clear and broadly known case of nested structure: the DNA molecule. This work extends to graphs some well-established approaches to grammatical inference previously applied only to strings. The bottom-up graph compression problem is related to the maximum cardinality (non-bipartite) matching problem. The algorithm accepts a variety of graph types, including directed graphs and graphs with labeled nodes and arcs. The resulting structure can be used for representation and classification of graphs.

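    The core bottom-up step can be sketched as follows: count edge types by their endpoint labels, pick the most frequent type, and select a non-overlapping set of such edges to contract into new labeled nodes. A greedy maximal matching stands in here for the maximum cardinality matching the paper relates this to, and the toy graph is a caricature of a sugar-phosphate backbone.

        from collections import Counter

        def frequent_edge_matching(nodes, edges):
            # nodes: {id: label}; edges: list of (u, v) pairs
            etype = lambda u, v: tuple(sorted((nodes[u], nodes[v])))
            best = Counter(etype(u, v) for u, v in edges).most_common(1)[0][0]
            used, matching = set(), []
            for u, v in edges:            # greedy maximal matching of that type
                if etype(u, v) == best and u not in used and v not in used:
                    matching.append((u, v))
                    used.update((u, v))
            return best, matching

        nodes = {i: ('S' if i % 2 == 0 else 'P') for i in range(8)}
        edges = [(i, i + 1) for i in range(7)]
        print(frequent_edge_matching(nodes, edges))
        # each matched pair would be contracted into one new node labeled by a
        # fresh non-terminal, and the step repeated until no edge type repeats
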
  • Multiple-Description Coding by Dithered Delta-Sigma Quantization

    Page(s): 63 - 72

    In this paper we address the connection between the multiple-description (MD) problem and delta-sigma quantization. Specifically, we exploit the inherent redundancy due to oversampling in delta-sigma quantization, and the simple linear-additive noise model resulting from dithered lattice quantization, in order to construct a symmetric MD coding scheme. We show that the use of feedback by means of a noise-shaping filter makes it possible to trade off central distortion for side distortion. Asymptotically, as the dimension of the lattice vector quantizer and the order of the noise-shaping filter approach infinity, we show that the symmetric two-channel MD rate-distortion function for the memoryless Gaussian source and MSE fidelity criterion can be achieved at any resolution. This realization provides an interesting new interpretation of the information-theoretic solution. The proposed design is symmetric in rate by construction, so there is no need for source splitting.

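    A toy version of the construction, under simplifying assumptions that are mine rather than the paper's (scalar instead of lattice quantization, first-order error-feedback noise shaping, descriptions formed by de-interleaving the 2x-oversampled output): averaging the two descriptions partially cancels the shaped noise, so the central distortion is lower than the side distortion.

        import numpy as np

        rng = np.random.default_rng(1)
        x = rng.normal(size=256)
        over = np.repeat(x, 2)                    # 2x oversampling (sample-and-hold)

        step, e_prev, y = 0.5, 0.0, []
        for v in over:
            d = rng.uniform(-step / 2, step / 2)  # subtractive dither
            u = v - e_prev                        # first-order error feedback
            q = step * np.round((u + d) / step) - d
            e_prev = q - u                        # quantization error to be shaped
            y.append(q)
        y = np.asarray(y)

        desc0, desc1 = y[0::2], y[1::2]           # the two descriptions
        central = (desc0 + desc1) / 2             # both received: shaped noise cancels
        print("side MSE   :", np.mean((desc0 - x) ** 2))
        print("central MSE:", np.mean((central - x) ** 2))
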
  • Multiple Description Coding for Stationary and Ergodic Sources

    Page(s): 73 - 82

    We consider the problem of multiple description (MD) coding for stationary sources under the squared error distortion measure. The MD rate region is derived for stationary and ergodic Gaussian sources and is shown to be achievable with a practical transform lattice quantization scheme. Moreover, the proposed scheme is asymptotically optimal at high resolution for all stationary sources with finite differential entropy rate.

  • Lossless Transmission of Correlated Sources over a Multiple Access Channel with Side Information

    Page(s): 83 - 92

    In this paper, we consider lossless transmission of arbitrarily correlated sources over a multiple access channel. Characterization of the achievable rates in the most general setting is one of the long-standing open problems of information theory. We consider a special case of this problem in which the receiver has access to correlated side information given which the sources are independent. We prove a source-channel separation theorem for this system; that is, we show that there is no loss in performance in first applying distributed source coding, where each encoder compresses its source conditioned on the side information at the receiver, and then applying an optimal multiple access channel code with independent codebooks. We also give necessary and sufficient conditions for source and channel separability in the above problem when there is perfect two-sided feedback from the receiver to the transmitters. These two communication scenarios are among the few non-trivial multi-user settings for which separation holds.

  • Distributed Functional Compression through Graph Coloring

    Page(s): 93 - 102

    We consider the distributed computation of a function of random sources with minimal communication. Specifically, given two discrete memoryless sources, X and Y, a receiver wishes to compute f(X, Y) based on (encoded) information sent from X and Y in a distributed manner. A special case, f(X, Y) = (X, Y), is the classical question of distributed source coding considered by Slepian and Wolf (1973). Orlitsky and Roche (2001) considered a somewhat restricted setup in which Y is available as side information at the receiver. They characterized the minimal rate at which X needs to transmit data to the receiver as the conditional graph entropy of the characteristic graph of X based on f. In our recent work (2006), we further established that this minimal rate can be achieved by means of graph coloring and distributed source coding (e.g., Slepian-Wolf coding). This characterization allows for the separation between "function coding" and "correlation coding." In this paper, we consider a more general setup where X and Y are both encoded (separately). This is a significantly harder setup for which to give a single-letter characterization of the complete rate region. We find that under a certain condition on the support set of X and Y (called the zigzag condition), it is possible to characterize the rate region based on graph colorings at X and Y separately. That is, any achievable pair of rates can be realized by first coloring graphs at X and Y separately (function coding) and then using Slepian-Wolf coding for these colors (correlation coding). We also obtain a single-letter characterization of the minimal joint rate. Finally, we provide simulation results based on graph coloring to establish the rate gains on real sequences.

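    A minimal sketch of the side-information case that this line of work builds on: construct the characteristic graph of X (connect two x-values whenever some admissible y makes f differ), color it, and send colors instead of values. Here f(x, y) = (x + y) mod 2 over a full support, so four values collapse to two colors (about 1 bit instead of 2); the greedy coloring is a stand-in for an optimal one.

        import math
        from collections import Counter
        from itertools import combinations

        def characteristic_graph(xs, ys, f, support):
            adj = {x: set() for x in xs}
            for x1, x2 in combinations(xs, 2):
                # connect x1, x2 if the decoder must distinguish them
                if any((x1, y) in support and (x2, y) in support
                       and f(x1, y) != f(x2, y) for y in ys):
                    adj[x1].add(x2)
                    adj[x2].add(x1)
            return adj

        def greedy_color(adj):
            color = {}
            for v in adj:
                taken = {color[u] for u in adj[v] if u in color}
                color[v] = next(c for c in range(len(adj)) if c not in taken)
            return color

        xs, ys = range(4), range(4)
        support = {(x, y) for x in xs for y in ys}
        g = characteristic_graph(xs, ys, lambda x, y: (x + y) % 2, support)
        color = greedy_color(g)
        freq = Counter(color.values())
        H = -sum(n / len(xs) * math.log2(n / len(xs)) for n in freq.values())
        print(color, H)   # the colors are then Slepian-Wolf coded at about H bits
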
  • Differential Compression of Executable Code

    Page(s): 103 - 112

    A platform-independent algorithm to compress file differences is presented. Since most file updates consist of software updates and security patches, particular attention is devoted to making the algorithm suitable for efficient compression of differences between executable files. The algorithm is designed so that its low-complexity decoder can be used in mobile and embedded devices. Compression is compared with several existing methods on a common test suite.

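    A toy COPY/ADD delta encoder in the general spirit of such tools (not the paper's algorithm): runs found in the reference file become COPY(offset, length) instructions, and everything else becomes ADD literals. difflib does the matching here purely for brevity.

        import difflib

        def make_delta(old: bytes, new: bytes):
            sm = difflib.SequenceMatcher(None, old, new, autojunk=False)
            ops, pos = [], 0
            for i, j, n in sm.get_matching_blocks():
                if j > pos:
                    ops.append(('ADD', new[pos:j]))   # literal bytes absent from old
                if n:
                    ops.append(('COPY', i, n))        # reuse a run from old
                pos = j + n
            return ops

        def apply_delta(old: bytes, ops) -> bytes:
            out = bytearray()
            for op in ops:
                out += op[1] if op[0] == 'ADD' else old[op[1]:op[1] + op[2]]
            return bytes(out)

        old = b'push ebp; mov ebp, esp; call f; pop ebp; ret'
        new = b'push ebp; mov ebp, esp; call g; pop ebp; ret'
        assert apply_delta(old, make_delta(old, new)) == new
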
  • Compressed Delta Encoding for LZSS Encoded Files

    Page(s): 113 - 122

    We explore the full compressed delta encoding problem in compressed texts, defined as the problem of constructing a delta file directly from the two given compressed files, without decompressing them. We concentrate on the case where the given files are compressed using LZSS and propose solutions for the special cases involving substitutions only.

  • Simple Linear-Time Off-Line Text Compression by Longest-First Substitution

    Page(s): 123 - 132

    We consider grammar-based text compression with longest-first substitution, where non-overlapping occurrences of a longest repeating substring of the input text are replaced by a new non-terminal symbol. We present a new text compression algorithm that simplifies the algorithm of S. Inenaga et al. (2003), and give a new formulation of the correctness proof, introducing the sparse lazy suffix tree data structure. We also present another type of longest-first substitution strategy that allows better compression. We show results of preliminary experiments comparing the grammar sizes of the two versions of the longest-first strategy and the most-frequent strategy.

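    A naive (far from linear-time) sketch of the longest-first rule: repeatedly find a longest substring with two non-overlapping occurrences and replace its non-overlapping occurrences by a fresh non-terminal. The suffix-tree machinery of the paper is exactly what makes this efficient; none of it appears here.

        def longest_first(text: str):
            rules, next_id = {}, 0
            while True:
                n, best = len(text), ''
                for length in range(n // 2, 1, -1):   # longest candidates first
                    first = {}
                    for i in range(n - length + 1):
                        s = text[i:i + length]
                        if s in first and i >= first[s] + length:  # non-overlapping
                            best = s
                            break
                        first.setdefault(s, i)
                    if best:
                        break
                if not best:
                    return text, rules
                nt = chr(0x2460 + next_id)            # fresh non-terminal: ①, ②, ...
                next_id += 1
                rules[nt] = best
                text = text.replace(best, nt)         # left-to-right, non-overlapping

        print(longest_first('abcabcabcxabcabc'))      # ('①abcx①', {'①': 'abcabc'})
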
  • Bounds on Redundancy in Constrained Delay Arithmetic Coding

    Page(s): 133 - 142

    We address the problem of a finite delay constraint in an arithmetic coding system. Due to the nature of the arithmetic coding process, source sequences causing arbitrarily large encoding or decoding delays exist. Therefore, to meet a finite delay constraint, it is necessary to intervene in the normal flow of the coding process, e.g., to insert fictitious symbols. This results in an inevitable coding rate redundancy. In this paper, we derive an upper bound on the achievable redundancy for a memoryless source. We show that this redundancy decays exponentially as a function of the delay constraint, and thus it is clearly superior to block-to-variable methods in that respect. The redundancy-delay exponent is shown to be lower bounded by log(1/alpha), where alpha is the probability of the most likely source symbol. Our results are easily applied to practical problems such as the compression of English text.

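    The abstract's exponent is concrete enough for a back-of-the-envelope check: redundancy decays roughly like 2^(-E*d) in the delay constraint d, with E at least log2(1/alpha). The alpha values below are made up for illustration.

        import math

        def exponent(alpha: float) -> float:
            # lower bound on the redundancy-delay exponent, in bits
            return math.log2(1.0 / alpha)

        for alpha in (0.5, 0.9):              # hypothetical most-likely-symbol probs
            E = exponent(alpha)
            print(f"alpha={alpha}: E >= {E:.3f}, "
                  f"decay factor ~{2 ** (-E * 10):.2e} at delay 10")
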
  • Distributed Grayscale Stereo Image Coding with Unsupervised Learning of Disparity

    Page(s): 143 - 152

    Distributed compression is particularly attractive for stereo images since it avoids communication between cameras. Since compression performance depends on exploiting the redundancy between images, knowing the disparity is important at the decoder. Unfortunately, distributed encoders cannot calculate this disparity and communicate it. We consider the compression of grayscale stereo images and develop an expectation maximization algorithm that performs unsupervised learning of disparity during the decoding procedure. Towards this, we devise a novel method for joint bitplane distributed source coding of grayscale images. Our experiments with both natural and synthetic 8-bit images show that the unsupervised disparity learning algorithm outperforms a system that performs no disparity compensation by 1 to more than 3 bits/pixel, and performs nearly as well as a system that knows the disparity through an oracle.

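    A toy of the unsupervised-learning idea, under strong simplifications of my own (per-row integer disparity, a noisy image standing in for partially decoded data, Gaussian likelihoods): the E-step computes a posterior over candidate disparities for each row, and the M-step re-estimates the disparity distribution.

        import numpy as np

        rng = np.random.default_rng(4)
        D = np.arange(-2, 3)                           # candidate disparities
        true_d = rng.choice(D, size=32, p=[.1, .2, .4, .2, .1])
        left = rng.normal(size=(32, 64))
        right = np.stack([np.roll(r, d) for r, d in zip(left, true_d)])
        noisy = left + 0.3 * rng.normal(size=left.shape)   # stand-in for decoded info

        prior = np.full(len(D), 1 / len(D))
        for _ in range(10):
            # E-step: per-row posterior over disparity
            ll = np.stack([-((noisy - np.roll(right, -d, axis=1)) ** 2).sum(axis=1)
                           / (2 * 0.09) for d in D], axis=1)
            post = prior * np.exp(ll - ll.max(axis=1, keepdims=True))
            post /= post.sum(axis=1, keepdims=True)
            prior = post.mean(axis=0)                  # M-step: disparity statistics
        print(np.round(prior, 2))                      # approaches [.1 .2 .4 .2 .1]
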
  • Edge-Based Prediction for Lossless Compression of Hyperspectral Images

    Page(s): 153 - 162

    We present two algorithms for error prediction in lossless compression of hyperspectral images. The algorithms are context-based and non-linear, and use a one-band look-ahead, thus requiring a minimal storage buffer. The first algorithm (NPHI) predicts each pixel in the current band using information from its context, where prediction contexts are defined by the neighboring causal pixels in the current band and the corresponding co-located causal pixels in the reference band. The second algorithm (EPHI) extends NPHI with edge-based analysis: pixels are classified into edge and non-edge pixels, and each pixel is then predicted using information from pixels in the same edge class within its context. Empirical results show that the proposed methods are competitive with other state-of-the-art algorithms of comparable complexity; on average, the edge-based technique (EPHI) produced the best overall result over the images in the test dataset.

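    A minimal sketch of inter-band context prediction in the spirit of NPHI (the gain-scaling rule and two-pixel context below are invented for illustration, not the published predictor): causal neighbors in the current band and their co-located counterparts in the reference band estimate a local gain, which scales the co-located reference pixel.

        import numpy as np

        def predict(cur, ref, y, x):
            # causal context: west and north neighbors in each band
            c_ctx = float(cur[y, x - 1]) + float(cur[y - 1, x])
            r_ctx = float(ref[y, x - 1]) + float(ref[y - 1, x])
            gain = c_ctx / r_ctx if r_ctx else 1.0
            return gain * ref[y, x]        # residual cur[y, x] - prediction is coded

        rng = np.random.default_rng(2)
        ref = rng.integers(100, 200, size=(8, 8)).astype(float)
        cur = 1.1 * ref + rng.normal(0, 1, size=(8, 8))   # spectrally correlated band
        print(cur[3, 3] - predict(cur, ref, 3, 3))        # small prediction residual
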
  • Spectral Predictors

    Page(s): 163 - 172

    Many scientific, imaging, and geospatial applications produce large high-precision scalar fields sampled on a regular grid. Lossless compression of such data is commonly done using predictive coding, in which weighted combinations of previously coded samples, known to both encoder and decoder, are used to predict subsequent nearby samples. In hierarchical, incremental, or selective transmission, the spatial pattern of the known neighbors is often irregular and varies from one sample to the next, which precludes prediction based on a single stencil and a fixed set of weights. To handle such situations and make the best use of available neighboring samples, we propose a local spectral predictor that offers optimal prediction by tailoring the weights to each configuration of known nearby samples. These weights may be precomputed and stored in a small lookup table. We show that predictive coding using our spectral predictor improves compression for various sources of high-precision data.

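    The idea of tailoring weights to each configuration of known neighbors can be imitated with plain least squares (a stand-in for the paper's spectral construction): for every boolean mask of available neighbors, fit weights on training rows and cache them in a table keyed by the mask.

        import numpy as np

        def weights_for(samples, mask):
            # samples: rows of [neighbor values..., target]; mask: known neighbors
            A = samples[:, :-1][:, mask]
            b = samples[:, -1]
            w, *_ = np.linalg.lstsq(A, b, rcond=None)
            return w                       # cache in a lookup table keyed by mask

        rng = np.random.default_rng(3)
        train = rng.normal(size=(64, 5))               # 4 neighbors + 1 target
        train[:, -1] = train[:, :-1].mean(axis=1)      # target tied to neighbors
        mask = np.array([True, True, False, True])     # e.g. one neighbor not yet coded
        print(weights_for(train, mask))                # fitted weights for this pattern
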
  • On Compression of Encrypted Video

    Page(s): 173 - 182

    We consider video sequences that have been encrypted without prior compression. Since encryption masks the source, traditional data compression algorithms are rendered ineffective. However, it has been shown that through the use of distributed source-coding techniques, the compression of encrypted data is in fact possible, meaning that data size can be reduced without requiring that the data be compressed prior to encryption. Indeed, under some reasonable conditions, neither security nor compression efficiency need be sacrificed when compression is performed on the encrypted data (Johnson et al., 2004). In this paper we develop an algorithm for the practical lossless compression of encrypted grayscale video. Our method is based on the temporal correlations in the video; this move to temporal dependence builds on our previous work on memoryless sources and on one- and two-dimensional Markov sources. For comparison, a motion-compensated lossless video encoder can compress each unencrypted frame of the standard "Foreman" test sequence by about 57%; our algorithm compresses the same frames, after encryption, by about 33%.

  • A Parallel Decoder for Lossless Image Compression by Block Matching

    Page(s): 183 - 192

    A work-optimal O(log n log M)-time PRAM-EREW algorithm for lossless image compression by block matching was shown in L. Cinque et al. (2003), where n is the size of the image and M is the maximum size of a match. The design of a parallel decoder was left as an open problem. By slightly modifying the parallel encoder, in this paper we show how to implement the decoder in O(log n log M) time with O(n / log n) processors on the PRAM-EREW. Under the realistic assumption that the size of the compressed image is O(n^(1/2)), the parallel decoder requires O(log^2 n) time and O(n / log n) processors on the mesh of trees.
