
Data Compression Conference, 1998. DCC '98. Proceedings

Date: March 30 - April 1, 1998


Displaying Results 1 - 25 of 118
  • Proceedings DCC '98 Data Compression Conference (Cat. No.98TB100225)

    Publication Year: 1998
  • Author index

    Publication Year: 1998 , Page(s): 587 - 589
  • A perceptual preprocessor to segment video for motion estimation

    Publication Year: 1998
    Cited by:  Papers (1)  |  Patents (1)

    Summary form only given. The objective of motion estimation and motion compensation is to reduce the temporal redundancy between adjacent pictures in a video sequence. Motion estimation is usually performed by calculating an error metric, such as mean absolute error (MAE), for each block in the current frame over a displaced region in the previously reconstructed frame. The motion vector is attained as the displacement having the minimum error metric. Although this achieves minimum-MAE in the residual block, it does not necessarily result in the best perceptual quality since the MAE is not always a good indicator of video quality. In low bit rate video coding, the overhead in sending the motion vectors becomes a significant proportion of the total data rate. The minimum-MAE motion vector may not achieve the minimum joint entropy for coding the residual block and motion vector, and thus may not achieve the best compression efficiency. In this paper, we attack these problems by introducing a perceptual preprocessor which takes advantage of the insensitivity of the human visual system (HVS) to mild changes in pixel intensity in order to segment the video into regions according to the perceptibility of the picture changes. Our preprocessor can exploit the local psycho-perceptual properties of the HVS because it is designed to segment video in the spatio-temporal pixel domain. The associated computational complexity for the segmentation in the spatio-temporal pixel domain is very small. With the information of segmentation, we then determine which macroblocks require motion estimation.

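    For reference, the sketch below shows the conventional minimum-MAE block-matching search that this abstract takes as its starting point; it is not the authors' perceptual preprocessor. The block size, search range, and synthetic test frames are illustrative assumptions.

```python
import numpy as np

def mae_block_match(cur, ref, block=16, search=7):
    """Exhaustive minimum-MAE block matching: for every block of the current
    frame, return the displacement into the reference frame with the smallest
    mean absolute error."""
    H, W = cur.shape
    vectors = np.zeros((H // block, W // block, 2), dtype=int)
    for by in range(0, H - block + 1, block):
        for bx in range(0, W - block + 1, block):
            target = cur[by:by + block, bx:bx + block].astype(np.int32)
            best = (np.inf, 0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > H or x + block > W:
                        continue
                    cand = ref[y:y + block, x:x + block].astype(np.int32)
                    mae = np.mean(np.abs(target - cand))
                    if mae < best[0]:
                        best = (mae, dy, dx)
            vectors[by // block, bx // block] = best[1:]
    return vectors

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)
    cur = np.roll(ref, (2, -3), axis=(0, 1))   # shift content down 2, left 3
    print(mae_block_match(cur, ref)[1, 1])     # interior block: expect [-2  3]
```
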
  • A lossless 2-D image compression technique for synthetic discrete-tone images

    Publication Year: 1998 , Page(s): 359 - 368
    Cited by:  Papers (5)  |  Patents (3)

    A new image compression technique, flexible automatic block decomposition (FABD), losslessly compresses typical discrete-tone pseudo-color images 1.5 to 5.5 times more compactly than GIF, and up to twice as compactly as JBIG. The algorithm is designed to exploit the two-dimensional redundancy in an image by expressing the image in terms of itself. Several optimizations allow the algorithm to complete in a matter of seconds on a 100 MIPS processor. Decompression is fast and simple, as is required in a Web browsing environment. Entropy coding techniques result in a coding rate of typically 0.03 bpp to 0.20 bpp.

  • On accelerating fractal compression

    Publication Year: 1998

    Summary form only given. Image data compression by fractal techniques has been widely investigated. Although its high compression ratio and resolution-free decoding properties are attractive, the encoding process is computationally demanding in order to achieve an optimal compression. This article proposes a fast fractal-based encoding algorithm (ACC) that uses the intensity changes of neighboring pixels to search for a suboptimal domain block for a given range block. Experimental results show that our algorithm achieves quality close to that of the optimal algorithm (OPT) for the 256×256 images Jet, Lenna, Mandrill, and Peppers, at a compression ratio of 16. A comparison of the performance of algorithms OPT and ACC on a Sun Ultra 1 Sparc workstation is given.

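    The sketch below illustrates the kind of exhaustive range-to-domain search, with a least-squares contrast/brightness fit, that makes fractal encoding expensive; it is the baseline that an accelerated method like ACC would speed up, not the authors' algorithm itself. The block sizes, search step, and random test image are illustrative assumptions.

```python
import numpy as np

def downsample2(block):
    # Average 2x2 neighbourhoods: a 16x16 domain block becomes 8x8.
    return block.reshape(8, 2, 8, 2).mean(axis=(1, 3))

def best_domain(range_blk, img, step=8):
    """Exhaustive search for the domain block (with a least-squares contrast s
    and brightness o fit) that best approximates one 8x8 range block."""
    H, W = img.shape
    r = range_blk.ravel().astype(float)
    best = (np.inf, 0, 0, 0.0, 0.0)          # (error, y, x, s, o)
    for y in range(0, H - 16 + 1, step):
        for x in range(0, W - 16 + 1, step):
            d = downsample2(img[y:y + 16, x:x + 16]).ravel()
            var = d.var()
            s = 0.0 if var == 0 else np.cov(d, r, bias=True)[0, 1] / var
            o = r.mean() - s * d.mean()
            err = np.mean((s * d + o - r) ** 2)
            if err < best[0]:
                best = (err, y, x, s, o)
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    img = rng.integers(0, 256, (64, 64)).astype(float)
    err, y, x, s, o = best_domain(img[8:16, 24:32], img)
    print(f"best domain at ({y},{x}), s={s:.2f}, o={o:.1f}, MSE={err:.1f}")
```
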
  • The prevention of error propagation in dictionary compression with update and deletion

    Publication Year: 1998 , Page(s): 199 - 208
    Cited by:  Papers (2)  |  Patents (1)

    In earlier work we presented the k-error protocol, a technique for protecting a dynamic dictionary lossless compression method from error propagation as the result of errors on the communication channel or compressed file. Experiments showed that in practice this approach is both fast and highly effective against a noisy channel or faulty storage medium. This past work addressed dictionary-based methods where new strings are added over time. Here we address the issue of dynamically deleting strings. Although without modification most standard methods used in practice (e.g., LRU strategies) perform poorly with respect to error propagation, we propose and analyze some that are very robust, including a strategy based on leaf pruning.

  • Efficient lossless coding of medical image volumes using reversible integer wavelet transforms

    Publication Year: 1998 , Page(s): 428 - 437
    Cited by:  Papers (9)  |  Patents (3)

    A novel lossless medical image compression algorithm based on three-dimensional integer wavelet transforms and zerotree coding is presented. The EZW algorithm is extended to three dimensions and context-based adaptive arithmetic coding is used to improve its performance. The algorithm (3-D CB-EZW) efficiently encodes image volumes by exploiting the dependencies in all three dimensions, while enabling lossy and lossless compression from the same bitstream. Results on lossless compression of CT and MR images are presented and compared to other lossless compression algorithms. The progressive performance of the 3-D CB-EZW algorithm is also compared to other lossy progressive coding algorithms. For representative images, the 3-D CB-EZW algorithm produced average decreases of 14% and 20% in compressed file sizes for CT and MR images, respectively, compared to the best available 2-D lossless compression techniques.

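    The abstract relies on reversible integer wavelet transforms. The sketch below shows the simplest such transform, a one-level 1-D S-transform (integer Haar) implemented with lifting, to illustrate how integer arithmetic gives exact reconstruction; it is only a stand-in under that assumption, not the paper's 3-D transform or its particular filters.

```python
import numpy as np

def s_transform_fwd(x):
    """One level of the 1-D S-transform (integer Haar via lifting).
    x: integer array of even length. Returns (lowpass, highpass), both integer."""
    a, b = x[0::2].astype(np.int64), x[1::2].astype(np.int64)
    h = a - b                      # detail (difference)
    l = b + (h >> 1)               # approximation: floor((a + b) / 2)
    return l, h

def s_transform_inv(l, h):
    b = l - (h >> 1)
    a = b + h
    out = np.empty(2 * len(l), dtype=np.int64)
    out[0::2], out[1::2] = a, b
    return out

if __name__ == "__main__":
    x = np.array([12, 7, 255, 0, 128, 129, 3, 3])
    l, h = s_transform_fwd(x)
    assert np.array_equal(s_transform_inv(l, h), x)   # lossless by construction
    print(l, h)
```
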
  • A joint source-channel coding scheme for robust image transmission

    Publication Year: 1998
    Cited by:  Papers (3)

    Summary form only given. We propose a joint source-channel coding scheme for the transmission of images over noisy channels. In this scheme, the interaction between source coding and channel coding is through a small number of parameters. A robust source coder based on the classification of the discrete wavelet transform (DWT) coefficients of the image is coupled with a flexible error control scheme which uses a bank of channel codes. Robustness in the source coder is achieved by scalar quantization of the coefficients followed by robust arithmetic coding to check catastrophic propagation of errors. Robust arithmetic coding generates a sequence of equal-length packets of bits along with a small number of sensitive bits for synchronization. A bank of rate compatible punctured convolutional codes concatenated with an outer error detection code like CRC is used for providing unequal error protection. Error detection is useful for error masking while decoding. The sensitive bits are protected and transmitted separately. An iterative algorithm is developed for selection of source coding rates and channel codes for the sequences of DWT coefficients. The algorithm uses the probabilities of packet decoding errors for the different channel codes and the operational rate distortion performance of the sequences of coefficients for the joint allocation. The system has a high reliability as the sensitive information is kept small. Some simulation results are provided for the transmission of the 512×512 image Lenna over binary symmetric channels.

  • Musical image compression

    Publication Year: 1998 , Page(s): 209 - 218

    Optical music recognition aims to convert the vast repositories of sheet music in the world into an on-line digital format. In the near future it will be possible to assimilate music into digital libraries and users will be able to perform searches based on a sung melody in addition to typical text-based searching. An important requirement for such a system is the ability to reproduce the original score as accurately as possible. Due to the huge amount of sheet music available, the efficient storage of musical images is an important topic of study. This paper investigates whether the “knowledge” extracted from the optical music recognition (OMR) process can be exploited to gain higher compression than the JBIG international standard for bi-level image compression. We present a hybrid approach where the primitive shapes of music extracted by the optical music recognition process (note heads, note stems, staff lines and so forth) are fed into a graphical symbol based compression scheme originally designed for images containing mainly printed text. Using this hybrid approach the average compression rate for a single page is improved by 3.5% over JBIG. When multiple pages with similar typography are processed in sequence, the file size is decreased by 4-8%. The relevant background to both optical music recognition and textual image compression is presented. Experiments performed on 66 test images are described, outlining the combinations of parameters that were examined to give the best results.

  • Color image compression by stack-run-end coding

    Publication Year: 1998

    Summary form only given. We present a new wavelet-based image coding algorithm for color image compression. The key innovation of this algorithm is a new context-oriented information conversion for data compression. A small number of symbol sets is designed to convert the information from the wavelet transform domain into a compact data structure for each subband. Unlike zerotree coding and its variations, which build hierarchical parent-child dependencies across subbands into their data representation, our work is a low-complexity intrasubband coding method which only addresses the information within a subband or combines information across subbands. The scheme first performs color space conversion, followed by uniform scalar quantization. The quantized coefficients are then categorized into a concise (stack, run, end) data structure; raster scanning order within each subband is the most commonly used choice, but any predefined scanning order will also work. Compared with standard stack-run coding, our method generalizes the symbol representation and extends the symbol alphabets. The termination symbols, which carry zero-value information toward the end of a subband or across subbands to the end of the image, help speed up decoding. Our experimental results show that our approach is very competitive with refined zerotree-type schemes.

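    The exact stack-run-end symbol alphabet is not spelled out in this summary, so the sketch below only illustrates the underlying intrasubband idea: scanning quantized coefficients in a fixed order and coding them as (zero-run, value) pairs with a termination marker. The token format and the 'EOS' marker are hypothetical stand-ins, not the paper's symbol sets.

```python
def run_value_encode(coeffs):
    """Toy intrasubband run-value coder: scan quantized coefficients in raster
    order and emit (zero_run, value) pairs, plus an 'EOS' marker standing in for
    termination symbols that skip the trailing zeros of the subband."""
    symbols, run = [], 0
    for c in coeffs:
        if c == 0:
            run += 1
        else:
            symbols.append((run, c))
            run = 0
    symbols.append("EOS")          # remaining coefficients are all zero
    return symbols

def run_value_decode(symbols, length):
    out, pos = [0] * length, 0
    for s in symbols:
        if s == "EOS":
            break
        run, value = s
        pos += run
        out[pos] = value
        pos += 1
    return out

if __name__ == "__main__":
    subband = [0, 0, 5, 0, -3, 0, 0, 0, 1, 0, 0, 0]
    enc = run_value_encode(subband)
    print(enc)                                    # [(2, 5), (1, -3), (3, 1), 'EOS']
    assert run_value_decode(enc, len(subband)) == subband
```
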
  • Joint source-channel coding using space-filling curves for bandwidth compression

    Publication Year: 1998

    Summary form only given. By jointly optimizing source and channel codes, we can generally get less overall average distortion and more robustness to channel impairments than with separately designed source and channel codes. In many cases, however, it is difficult or sometimes impossible for a single coding scheme to achieve the least possible distortion over a large range of channel variations, except in the trivial Gaussian case, where the rate distortion bound on the mean square error is achieved for all values of the channel signal to noise ratio (SNR) when the i.i.d. Gaussian source and the additive white Gaussian noise (AWGN) channel have the same bandwidth. In this paper, we show that a simple uncoded system that transmits an unmodified uniform i.i.d. source defined on the unit interval I=[0,1] through a modulo AWGN channel (an AWGN channel followed by a mod-1 mapping such that the channel output is the fractional part of the AWGN channel output) can achieve the rate distortion bound almost optimally for all values of the channel SNR.

  • Intensity controlled motion compensation

    Publication Year: 1998 , Page(s): 249 - 258
    Cited by:  Patents (4)

    A new motion compensation technique that allows more than one motion vector inside each block is introduced. The technique uses the intensity information to determine which motion vector to apply at any given pixel. An efficient motion estimation algorithm is described that finds near optimal selections of motion vectors. The simulation results show a significant improvement in the prediction accuracy over the traditional one motion vector per block model.

  • The multiple description rate region for high resolution source coding

    Publication Year: 1998 , Page(s): 149 - 158
    Cited by:  Papers (10)

    Consider encoding a memoryless source using two descriptions, the first at rate R1 and distortion d1, the second at rate R2 and distortion d2. Combining the two descriptions, the source can be reconstructed with distortion d0. For a Gaussian source of variance σ², Ozarow (1980) found an explicit characterization of the region R*(σ²; d1, d2, d0) ⊂ R² of achievable rate pairs (R1, R2) with given mean squared distortions d1, d2, and d0. This is the only case for which the multiple description rate-distortion region is completely known. We show that for a general real-valued source X and a locally quadratic distortion measure of the form ρ(x, x̂) = w(x)²(x − x̂)² + o((x − x̂)²), the region of admissible rate pairs is arbitrarily well approximated in the limit of small distortions by the region R*(P_X 2^(2E{log w(X)}); d1, d2, d0), where R*(σ²; d1, d2, d0) denotes the multiple description rate region of a Gaussian source with variance σ², and where P_X is the entropy-power of the source. Applications to companding quantization are also considered.

  • A locally optimal design algorithm for block-based multi-hypothesis motion-compensated prediction

    Publication Year: 1998 , Page(s): 239 - 248
    Cited by:  Papers (22)  |  Patents (44)

    Multi-hypothesis motion-compensated prediction extends traditional motion-compensated prediction used in video coding schemes. Known algorithms for block-based multi-hypothesis motion-compensated prediction are, for example, overlapped block motion compensation (OBMC) and bidirectionally predicted frames (B-frames). This paper presents a generalization of these algorithms in a rate-distortion framework. All blocks which are available for prediction are called hypotheses. Further, we explicitly distinguish between the search space and the superposition of hypotheses. Hypotheses are selected from a search space and their spatio-temporal positions are transmitted by means of spatio-temporal displacement codewords. Constant predictor coefficients are used to linearly combine the hypotheses of a multi-hypothesis. The presented design algorithm provides an estimation criterion for optimal multi-hypotheses, a rule for optimal displacement codes, and a condition for optimal predictor coefficients. Statistically dependent hypotheses of a multi-hypothesis are determined by an iterative algorithm. Experimental results show that increasing the number of hypotheses from 1 to 8 provides prediction gains of up to 3 dB in prediction error.

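    A minimal sketch of the superposition step described above: hypothesis blocks are combined linearly with constant predictor coefficients, of which the familiar bidirectional (B-frame) average is the two-hypothesis special case. The coefficient values and synthetic blocks are illustrative assumptions, not the paper's optimized predictors.

```python
import numpy as np

def multihypothesis_predict(hypotheses, coefficients):
    """Linearly combine motion-compensated hypothesis blocks with constant
    predictor coefficients (weights [0.5, 0.5] is the bidirectional average)."""
    assert len(hypotheses) == len(coefficients)
    pred = np.zeros_like(hypotheses[0], dtype=float)
    for h, c in zip(hypotheses, coefficients):
        pred += c * h.astype(float)
    return pred

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    truth = rng.normal(128, 20, (8, 8))
    # Two noisy hypotheses (e.g. blocks fetched from past and future frames).
    h1 = truth + rng.normal(0, 10, truth.shape)
    h2 = truth + rng.normal(0, 10, truth.shape)
    for coeffs in ([1.0, 0.0], [0.5, 0.5]):
        err = np.mean((multihypothesis_predict([h1, h2], coeffs) - truth) ** 2)
        print(coeffs, f"prediction MSE {err:.1f}")  # averaging roughly halves the MSE
```
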
  • Correcting English text using PPM models

    Publication Year: 1998 , Page(s): 289 - 298
    Cited by:  Papers (4)

    An essential component of many applications in natural language processing is a language modeler able to correct errors in the text being processed. For optical character recognition (OCR), poor scanning quality or extraneous pixels in the image may cause one or more characters to be mis-recognized, while for spelling correction, two characters may be transposed, or a character may be inadvertently inserted or missed out. This paper describes a method for correcting English text using a PPM model. A method that segments words in English text is introduced and is shown to be a significant improvement over previously used methods. A similar technique is also applied as a post-processing stage after pages have been recognized by a state-of-the-art commercial OCR system. We show that the accuracy of the OCR system can be increased from 96.3% to 96.9%, a decrease of about 14 errors per page.

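    As a toy illustration of ranking candidate corrections with a character-context model, the sketch below trains a fixed-order model with add-one smoothing; a real PPM model blends several context orders through escape probabilities, so this is only an assumption-level stand-in, and the corpus and candidate strings are made up.

```python
from collections import defaultdict
import math

ORDER = 2  # context length; full PPM blends orders 0..k via escape probabilities

def train(text):
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(ORDER, len(text)):
        counts[text[i - ORDER:i]][text[i]] += 1
    return counts

def log_prob(text, counts, alphabet):
    """Log-probability of `text` under the trained context model (add-one smoothing)."""
    lp = 0.0
    for i in range(ORDER, len(text)):
        ctx, ch = text[i - ORDER:i], text[i]
        total = sum(counts[ctx].values())
        lp += math.log((counts[ctx][ch] + 1) / (total + len(alphabet)))
    return lp

if __name__ == "__main__":
    corpus = "the quick brown fox jumps over the lazy dog and then the fox sleeps "
    model = train(corpus)
    alphabet = set(corpus)
    # Rank candidate corrections of a garbled fragment by model probability;
    # the correctly spelled candidate should typically score highest.
    for cand in ["the fox", "teh fox", "th efox"]:
        print(cand, round(log_prob(" " + cand + " ", model, alphabet), 2))
```
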
  • Compression via guided parsing

    Publication Year: 1998
    Cited by:  Papers (2)  |  Patents (1)

    Summary form only given. The reduction in storage size achieved by compressing a file translates directly into a reduction in transmission time when communicating the file. An increasingly common form of transmitted data is a computer program description. This paper examines the compression of source code, the high-level language representation of a program, using the language's context free grammar. We call the general technique guided parsing since it is a compression scheme based on predicting the behavior of a parser when it parses the source code and guiding its behavior by encoding its next action based on this prediction. In this paper, we describe the implementation and results of two very different forms of guided parsing. One is based on bottom-up parsing while the other is a top-down approach.

  • Joint source channel matching for a wireless communications link

    Publication Year: 1998
    Cited by:  Papers (2)

    Summary form only given. With the rapid growth of wireless communications systems there is an increasing demand for efficient image and video transmission. Significant performance gains can be obtained from joint source channel matching where system resources are assigned based on the tradeoff between data and redundancy. We develop a more general approach for joint source channel matching based on a parametric distortion model that incorporates the flexibility and constraints of both the source and the channel. We use parametric models describing source and channel characteristics that can be accurately applied to most classes of source and channel coders. To show the generality of our approach, we applied it to the familiar Said-Pearlman progressive image coder with two types of channel coders: a coder with orthogonal symbols of different power, and fixed-rate BPSK modulation overlaid with Reed-Solomon codes.

  • Adjustments for JPEG de-quantization coefficients

    Publication Year: 1998
    Cited by:  Papers (4)

    Summary form only given. In the JPEG baseline compression algorithm, the quantization loss in the DCT coefficients can be reduced if we make use of the observation that the distributions of the DCT coefficients peak at zero and decrease exponentially. It means that the mid-point of a quantization interval, say m, used by the JPEG decoder to restore all coefficients falling within the interval, may be replaced by another point, say y, closer to zero but within the interval. If we model the distributions by λe^(-λ|x|), where λ>0 is a constant derivable from some statistical parameters such as the mean or variance, and we require that the adjustment q=|m-y| be chosen so that the sum of the (signed) loss over all coefficients falling within a quantization interval is zero for each interval, we can derive q = Q(e^(λQ)+1)/(2(e^(λQ)-1)) - 1/λ, where Q is the quantizer step size. To test the usefulness of the above idea, we implemented both approaches: (1) the JPEG encoder computes λ for each DCT distribution and passes it as part of the coded data to the decoder, and (2) the JPEG decoder computes λ from the quantized DCT coefficients incrementally as it decodes its input. Through experiments, we found that neither of these approaches resulted in much improvement, but we found a better approach (OUR) which does not require any modeling of the DCT. It uses Σ(|m-y|·C)/ΣC to compute adjustments, where C is the number of coefficients falling within an interval, and the Σ is taken over all intervals not containing the zero DCT value. We also implemented the formulation developed by Ahumada et al. (see SID Digest, 1994) to compare it with the results of the OUR approach. The comparison is shown in terms of the percentage reduction in the RMSE of the images.

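    The closed-form adjustment quoted above is a reconstruction: it follows from choosing y as the centroid of the Laplacian density over a quantization interval. The sketch below evaluates that reconstructed formula and checks it against a direct numerical centroid; the λ and Q values are arbitrary examples.

```python
import numpy as np

def centroid_adjustment(lam, Q):
    """Reconstructed closed form: shift q = |m - y| from the interval mid-point m
    toward zero so that the expected signed quantization error over the interval
    is zero, for a Laplacian model lam*exp(-lam*|x|) and quantizer step Q."""
    return Q * (np.exp(lam * Q) + 1) / (2 * (np.exp(lam * Q) - 1)) - 1 / lam

def centroid_adjustment_numeric(lam, Q, a, n=200_000):
    """Numerical check: distance between the mid-point and the centroid of the
    density over one positive-side interval [a, a+Q]."""
    x = np.linspace(a, a + Q, n)
    w = np.exp(-lam * x)
    centroid = (x * w).sum() / w.sum()
    return (a + Q / 2) - centroid

if __name__ == "__main__":
    lam, Q = 0.15, 10.0
    print(centroid_adjustment(lam, Q))                   # closed form
    print(centroid_adjustment_numeric(lam, Q, a=10.0))   # should agree for any a > 0
```
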
  • Predictive fractal image coding: hybrid algorithms and compression of residuals

    Publication Year: 1998
    Cited by:  Patents (2)

    Summary form only given. The authors introduce hybrid algorithms which consist of a fractal predictor in the spatial domain with subsequent coding of the residual image (the error image between the fractal prediction and the image to compress). For coding the residual, either wavelet-based coding (based on the SPIHT coder) or DCT-based coding (as used in interframe compression, e.g. for B or P frames in H.261 and MPEG-1/2) is employed. Additionally, they contribute to the discussion about the performance of wavelet and DCT based algorithms for the compression of motion compensated error frames in interframe video coding algorithms, since the residual images considered in the proposed hybrid algorithms exhibit similar (or even identical) statistical properties as motion compensated error frames.

  • Video coding using vector zerotrees and adaptive vector quantization

    Publication Year: 1998

    Summary form only given. We present a new algorithm for intraframe coding of video which combines zerotrees of vectors of wavelet coefficients and the generalized-threshold-replenishment (GTR) technique for adaptive vector quantization (AVQ). A data structure, the vector zerotree (VZT), is introduced to identify trees of insignificant vectors, i.e., those vectors of wavelet coefficients in a dyadic subband decomposition that are to be coded as zero. GTR coders are then applied to each subband to efficiently code the significant vectors by adapting to their changing statistics. Both VZT generation and GTR coding are based upon minimization of criteria involving both rate and distortion. In addition, perceptual performance is improved by invoking simple, perceptually motivated weighting in both the VZT and the GTR coders.

  • The H.263+ video coding standard: complexity and performance

    Publication Year: 1998 , Page(s): 259 - 268
    Cited by:  Papers (4)

    The emerging ITU-T H.263+ low bit-rate video coding standard is version 2 of the draft international standard ITU-T H.263. In this paper, we discuss this emerging video coding standard and present compression performance results based on our public domain implementation of H.263+.

  • Minimum message length hidden Markov modelling

    Publication Year: 1998 , Page(s): 169 - 178

    This paper describes a minimum message length (MML) approach to finding the most appropriate hidden Markov model (HMM) to describe a given sequence of observations. An MML estimate for the expected length of a two-part message stating a specific HMM and the observations given this model is presented, along with an effective search strategy for finding the best number of states for the model. The information estimate enables two models with different numbers of states to be fairly compared, which is necessary if the search of this complex model space is to avoid the worst locally optimal solutions. The general purpose MML classifier `Snob' has been extended and the new program `tSnob' is tested on `synthetic' data and a large `real world' dataset. The MML measure is found to be an improvement on the Bayesian information criterion (BIC) and on the unsupervised search strategy.

  • Universal data compression and linear prediction

    Publication Year: 1998 , Page(s): 511 - 520
    Cited by:  Papers (6)

    The relationship between prediction and data compression can be extended to universal prediction schemes and universal data compression. Previous work shows that minimizing the sequential squared prediction error for individual sequences can be achieved using the same strategies which minimize the sequential code length for data compression of individual sequences. Defining a “probability” as an exponential function of sequential loss, results from universal data compression can be used to develop universal linear prediction algorithms. Specifically, we present an algorithm for linear prediction of individual sequences which is twice-universal, over parameters and model orders.

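    A rough sketch of the "probability as an exponential function of sequential loss" idea: linear predictors of several model orders run in parallel, and their outputs are mixed with weights exp(-cumulative squared error / c). Refitting each order by batch least squares at every step is a simplification of the paper's sequential algorithm, and the constant c, the order range, and the AR(2) test signal are assumptions.

```python
import numpy as np

def mixture_linear_predict(x, max_order=4, c=4.0):
    """Sequentially predict x[t] by mixing linear predictors of orders 1..max_order;
    each order's mixture weight decays exponentially with its cumulative squared
    prediction error on the sequence so far."""
    n = len(x)
    losses = np.zeros(max_order)          # cumulative squared error per order
    preds = np.zeros(n)
    for t in range(n):
        per_order = np.zeros(max_order)
        for p in range(1, max_order + 1):
            if t <= p:                     # not enough history: predict 0
                continue
            # Fit order-p coefficients on all past samples, then predict x[t].
            rows = np.array([x[i - p:i][::-1] for i in range(p, t)])
            coef = np.linalg.lstsq(rows, x[p:t], rcond=None)[0]
            per_order[p - 1] = coef @ x[t - p:t][::-1]
        w = np.exp(-losses / c)
        preds[t] = w @ per_order / w.sum()
        losses += (per_order - x[t]) ** 2
    return preds

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    # AR(2) test sequence: the mixture should come to favour the order-2 predictor.
    x = np.zeros(400)
    for t in range(2, 400):
        x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.normal(0, 0.1)
    p = mixture_linear_predict(x)
    print("mean squared prediction error:", np.mean((p[50:] - x[50:]) ** 2))
```
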
  • Mail servers with embedded data compression mechanisms

    Publication Year: 1998
    Cited by:  Patents (9)

    Summary form only given. Typically, e-mail messages are moved across the Internet using the Simple Mail Transfer Protocol (SMTP), which utilizes the connection-oriented Transmission Control Protocol (TCP) to establish connections between two mail servers. The POP3 (Post Office Protocol) is used to retrieve the mail for individual users from a server. We designed and implemented e-mail servers that contain embedded data compression mechanisms; the SMTP protocol is extended to allow the mail client and server to negotiate compression, which is transparent to the users, and the new servers are backward-compatible with traditional mail servers. The LZSS compression algorithm is used to carry out the data compression. Different kinds of mail data were used to test the e-mail system. Textual data, binary data, and graphical data were transported across the Internet using the designed e-mail system. Several Windows NT hosts were identified for this experiment. These hosts were connected to the Internet.

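    The abstract states that LZSS performs the compression. Below is a minimal, token-level LZSS-style encoder/decoder (greedy longest match in a sliding window, literals versus (offset, length) references); the window and match-length limits are illustrative assumptions, and no bit-level packing or SMTP integration is attempted.

```python
WINDOW, MIN_MATCH, MAX_MATCH = 4096, 3, 18

def lzss_encode(data: bytes):
    """Greedy LZSS: emit ('lit', byte) or ('ref', offset, length) tokens."""
    tokens, i = [], 0
    while i < len(data):
        best_len, best_off = 0, 0
        for j in range(max(0, i - WINDOW), i):
            length = 0
            while (length < MAX_MATCH and i + length < len(data)
                   and data[j + length] == data[i + length]):
                length += 1
            if length > best_len:
                best_len, best_off = length, i - j
        if best_len >= MIN_MATCH:
            tokens.append(("ref", best_off, best_len))
            i += best_len
        else:
            tokens.append(("lit", data[i]))
            i += 1
    return tokens

def lzss_decode(tokens):
    out = bytearray()
    for t in tokens:
        if t[0] == "lit":
            out.append(t[1])
        else:
            _, off, length = t
            # Copy byte by byte so overlapping references (length > offset) work.
            for _ in range(length):
                out.append(out[-off])
    return bytes(out)

if __name__ == "__main__":
    msg = b"to be or not to be, that is the question. to be or not to be."
    toks = lzss_encode(msg)
    assert lzss_decode(toks) == msg
    print(f"{len(msg)} bytes -> {len(toks)} tokens")
```
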
  • AudioPaK-an integer arithmetic lossless audio codec

    Publication Year: 1998
    Cited by:  Papers (2)

    We designed a simple, lossless audio codec, called AudioPaK, which uses only a small number of integer arithmetic operations on both the coder and the decoder side. The main operations of this codec are polynomial prediction and Golomb-Rice coding, and are done on a frame basis. Our coder performs as well as, or even better than, most lossless audio codecs.

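    A small sketch of the two stages named above: fixed polynomial predictors (orders 0-3, computed as repeated differences, as in Shorten-style coders) followed by Golomb-Rice coding of the residuals. The order-selection and Rice-parameter heuristics here are simple assumptions, not necessarily AudioPaK's exact rules, and the test frame is synthetic.

```python
import numpy as np

def polynomial_residual(frame, order):
    """Fixed polynomial predictor of the given order, computed as the order-th
    difference of the frame (order 0 predicts zero, order 1 the previous sample,
    and so on). prepend keeps the frame length; a real codec would transmit the
    warm-up samples separately."""
    r = np.asarray(frame, dtype=np.int64)
    for _ in range(order):
        r = np.diff(r, prepend=r[:1])
    return r

def rice_encode(residuals, k):
    """Golomb-Rice code: zig-zag map residuals to unsigned integers, then emit a
    unary quotient followed by k binary remainder bits."""
    bits = []
    for v in residuals:
        v = int(v)
        u = (v << 1) ^ (v >> 63)       # 0,-1,1,-2,2,... -> 0,1,2,3,4,...
        q, r = u >> k, u & ((1 << k) - 1)
        bits.append("1" * q + "0" + (format(r, f"0{k}b") if k else ""))
    return "".join(bits)

def encode_frame(frame):
    """Pick the predictor order with the smallest summed |residual|, then choose
    the Rice parameter k from the mean residual magnitude (simple heuristics)."""
    order = min(range(4), key=lambda o: int(np.abs(polynomial_residual(frame, o)).sum()))
    res = polynomial_residual(frame, order)
    mean = max(1, int(np.abs(res).mean()))
    k = max(0, mean.bit_length() - 1)
    return order, k, rice_encode(res, k)

if __name__ == "__main__":
    t = np.arange(1152)
    frame = (1000 * np.sin(2 * np.pi * 440 * t / 44100)).astype(np.int64)
    order, k, bitstream = encode_frame(frame)
    print(f"order {order}, k={k}: {len(bitstream)} bits vs {16 * len(frame)} bits raw")
```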