Data Compression Conference, 1995. DCC '95. Proceedings

Date: 28-30 March 1995

Displaying Results 1 - 25 of 117
  • Proceedings DCC '95 Data Compression Conference [table of contents]

    Publication Year: 1995
  • Quantization distortion in block transform-compressed data

    Publication Year: 1995
    Cited by:  Papers (1)  |  Patents (8)

    Summary form only given, as follows. The JPEG image compression standard is an example of a block transform-based compression scheme; the image is systematically subdivided into blocks that are individually transformed, quantized, and encoded. The compression is achieved by quantizing the transformed data, reducing the data entropy and thus facilitating efficient encoding. Block transform compression schemes exhibit sharp discontinuities at data block boundaries: this phenomenon is a visible manifestation of the compression quantization distortion. For example, in compression algorithms such as JPEG these blocking effects manifest themselves visually as discontinuities between adjacent 8×8 pixel image blocks. In general, the distortion characteristics of block transform-based compression techniques are understandable in terms of the properties of the transform basis functions and the transform coefficient quantization error. In particular, the blocking effects exhibited by JPEG are explained by two simple observations demonstrated in this work: a disproportionate fraction of the total quantization error accumulates on block edge pixels; and the quantization errors among pixels within a compression block are highly correlated, while the quantization errors between pixels in separate blocks are uncorrelated. A generic model of block transform compression quantization noise is introduced, applied to synthesized and real one- and two-dimensional data using the DCT as the transform basis, and the results of the model are shown to predict distortion patterns observed in data compressed with JPEG.
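
    The two observations above lend themselves to a small numerical check. The sketch below is a minimal illustration, not the authors' model: the block length, the AR(1) source, its correlation, and the uniform quantizer step are all assumed parameters. It applies an orthonormal DCT per block, quantizes the coefficients, reconstructs, and reports per-pixel RMS error and the intra-block error correlation.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix; rows are the transform basis vectors."""
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] /= np.sqrt(2.0)
    return C

rng = np.random.default_rng(0)
n, blocks, rho, step = 8, 50000, 0.99, 1.0   # block length, trials, AR(1) correlation, quantizer step (assumed)
C = dct_matrix(n)

# Highly correlated AR(1) blocks, one block per row (a rough stand-in for image rows).
x = np.empty((blocks, n))
x[:, 0] = rng.normal(size=blocks)
for i in range(1, n):
    x[:, i] = rho * x[:, i - 1] + np.sqrt(1 - rho ** 2) * rng.normal(size=blocks)

coef = x @ C.T                               # forward DCT of every block
rec = (step * np.round(coef / step)) @ C     # uniform coefficient quantization, then inverse DCT
err = rec - x

print("per-pixel RMS quantization error across the block (edges are first/last):")
print(np.sqrt((err ** 2).mean(axis=0)).round(3))
print("error correlation between pixels 0 and 1 of the same block:",
      round(float(np.corrcoef(err[:, 0], err[:, 1])[0, 1]), 3))
```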

  • Efficient handling of large sets of tuples with sharing trees

    Publication Year: 1995
    Cited by:  Papers (1)

    Summary form only given; substantially as follows. Computing with sets of tuples (n-ary relations) is often required in programming, yet it is a major cause of performance degradation as the size of the sets increases. The authors present a new data structure dedicated to the manipulation of large sets of tuples, dubbed a sharing tree. The main idea for reducing memory consumption is to share some sub-tuples of the set represented by a sharing tree. Various conditions are given. The authors have developed algorithms for common set operations (membership, insertion, equality, union, intersection, ...) whose theoretical complexities are proportional to the sizes of the sharing trees given as arguments, which are usually much smaller than the sizes of the represented sets.
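
    As a rough flavor of the idea (and only that), the sketch below implements a much-simplified prefix-sharing trie over tuples; the paper's sharing trees are layered DAGs that additionally share common sub-tuples at the suffix end, which is where the large memory savings come from.

```python
from typing import Dict, Tuple

class TupleTrie:
    """Simplified stand-in for a sharing tree: shares common tuple *prefixes* only."""
    def __init__(self) -> None:
        self.children: Dict[object, "TupleTrie"] = {}
        self.end = False                     # marks that a stored tuple ends at this node

    def insert(self, tup: Tuple) -> None:
        node = self
        for item in tup:
            node = node.children.setdefault(item, TupleTrie())
        node.end = True

    def __contains__(self, tup: Tuple) -> bool:
        node = self
        for item in tup:
            node = node.children.get(item)
            if node is None:
                return False
        return node.end

s = TupleTrie()
for t in [(1, 2, 3), (1, 2, 4), (1, 5, 3)]:   # the (1, 2, ...) prefix is stored once
    s.insert(t)
print((1, 2, 4) in s, (2, 2, 4) in s)          # True False
```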

  • PPT based fast architecture & algorithm for discrete wavelet transforms

    Publication Year: 1995

  • Algorithm evaluation for the synchronous data compression standards

    Publication Year: 1995

  • An investigation of effective compression ratios for the proposed synchronous data compression protocol

    Publication Year: 1995

  • The development of a standard for compression of synchronous data in DSU/CSU's

    Publication Year: 1995

    Summary form only given, as follows. Over the past year, a standard for compression of synchronous data in DSU/CSUs (56 kb/s) has been developed. The development began in an informal industry consortium known as the Synchronous Data Compression Consortium (SDCC), and the work later migrated to a committee of the Telecommunications Industry Association (TIA). This work chronicles the development of the standard, which is based on the Internet-standard Point-to-Point Protocol, examining both the issues involved in applying data compression to communication links and the impact of the choices made along the development path.

  • Parallel image compression using vector quantization

    Publication Year: 1995

    Summary form only given. The authors used a parallel approach to address the complexity issues of vector quantization: they implemented two full-search memoryless parallel vector quantizers, using 2×2 and 4×4 fixed block sizes, on a shared-memory MIMD machine, the BBN GP 1000. The squared-error distortion measure and the LBG codebook design algorithm were used. The search of the codebook is done in parallel for both the image coding and codebook design phases. The input vectors are held in shared memory distributed among all the processor node memory modules, and a private copy of the codebook is given to each processor node. A parallel task is generated for each input vector to be encoded; each task searches the entire codebook to determine the minimum-distortion codevector, and the index of this vector is the output of the task. Load balancing of the tasks on the available processor nodes is done automatically by the operating system. This design requires minimal synchronization between the tasks to accumulate the total distortion. While good parallel performance was achieved, the vector quantizers were generally lacking in fidelity; it is expected that these methods can be extended to achieve high fidelity while maintaining good parallel performance.
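
    The one-task-per-vector structure is easy to mimic on a modern multicore machine. The sketch below is only an analogy to the approach described, with a process pool standing in for the GP 1000's processor nodes; the codebook, block size, and data are made-up stand-ins, and each task performs a plain squared-error full search.

```python
import numpy as np
from multiprocessing import Pool

# Hypothetical stand-ins: a trained LBG codebook and 4x4 image blocks flattened
# to 16-dimensional vectors (the paper used 2x2 and 4x4 block sizes).
rng = np.random.default_rng(1)
CODEBOOK = rng.random((256, 16))          # each worker process gets its own copy
blocks = rng.random((4096, 16))

def encode_one(vec):
    """One task: full search of the codebook for the minimum squared-error codevector."""
    d = ((CODEBOOK - vec) ** 2).sum(axis=1)
    return int(np.argmin(d)), float(d.min())

if __name__ == "__main__":
    with Pool() as pool:                  # one logical task per input vector
        results = pool.map(encode_one, blocks, chunksize=64)
    indices = [i for i, _ in results]
    total_distortion = sum(d for _, d in results)
    print(len(indices), total_distortion)
```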

  • Making the Best of JPEG at Very High Compression Ratios: Rectangular Pixel Averaging for Mars Pathfinder

    Publication Year: 1995

    As more and more NASA missions turn to image compression to maximize their data return at constrained bit rates, very often adopting JPEG as the centerpiece of their image compression system, they are noticing one limitation of JPEG: its poor performance at very high compression ratios (typically 32 and above). This is the case for engineering uses of image data on Mars Pathfinder, such as assessment of the lander condition and deployed airbags, and rover navigation. Unlike science scenes, engineering images are often sufficient at low resolution. Unfortunately, at very high compression ratios JPEG produces unacceptable artifacts, due to the discrete cosine transform size being limited to 8, for which no clever quantization or entropy coding can compensate. Still, JPEG can successfully be used as part of a compression scheme if the encoding is preceded by low-pass filtering and downsampling, while the decoding is followed by interpolation and upsampling to restore the image to its original size. The horizontal and vertical downsampling/upsampling factors are chosen based on the known distance to the objective and its size, as well as on the fact that resolution in azimuth degrades more gradually than in elevation, leading to a larger downsampling factor in azimuth. Assuming unweighted pixel averaging is used as the low-pass filter before decimation, optimal interpolation filters which minimize the mean squared reconstruction error (MSRE) in the absence of JPEG are derived. In the presence of large JPEG-induced quantization noise, however, bilinear interpolation filters are shown to outperform these optimal interpolation filters. Engineering assessment of images compressed with this scheme at ratios up to 126-to-1 using bilinear interpolation confirms its performance and its success in extending the operational compression ratio range of JPEG.
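
    The pre/post-processing wrapped around JPEG is simple to prototype. The following sketch omits the JPEG step itself; the image, the 2 (elevation) by 4 (azimuth) factors, and the sizes are assumed for illustration only. It shows unweighted pixel averaging before encoding and separable bilinear interpolation after decoding.

```python
import numpy as np

def block_average(img, fy, fx):
    """Unweighted pixel averaging over fy-by-fx rectangles (image cropped to a multiple)."""
    h, w = img.shape[0] // fy * fy, img.shape[1] // fx * fx
    return img[:h, :w].reshape(h // fy, fy, w // fx, fx).mean(axis=(1, 3))

def bilinear_upsample(img, fy, fx):
    """Separable linear interpolation back to fy/fx times the size (a simple stand-in
    for the bilinear post-filter; boundary samples are simply held)."""
    h, w = img.shape
    ys, xs = np.linspace(0, h - 1, h * fy), np.linspace(0, w - 1, w * fx)
    cols = np.stack([np.interp(ys, np.arange(h), img[:, j]) for j in range(w)], axis=1)
    return np.stack([np.interp(xs, np.arange(w), cols[i]) for i in range(cols.shape[0])])

rng = np.random.default_rng(0)
frame = rng.random((256, 512))          # stand-in for a lander camera frame
small = block_average(frame, 2, 4)      # larger factor in azimuth (horizontal), as in the paper
# ... JPEG encode/decode of `small` would sit here in the flight scheme ...
restored = bilinear_upsample(small, 2, 4)
print(frame.shape, small.shape, restored.shape)   # (256, 512) (128, 128) (256, 512)
```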

  • The Implementation of Data Compression in the Cassini RPWS Dedicated Compression Processor

    Publication Year: 1995
    Cited by:  Papers (1)

  • Enhancement of IMP lossy image data compression using LCT

    Publication Year: 1995
    Cited by:  Papers (1)

    Summary form only given, as follows. This work addresses the lossy image data compression designed for the NASA Imager for Mars Pathfinder (IMP) project. Both the mission profile and the availability of a RISC central board computer support a completely software-oriented implementation of the IMP lossy image data compression. One of the mission objectives is to demonstrate the capability to place science payloads on the surface of Mars using a simple, reliable, low-cost system within a demanding schedule. In keeping with the cost and schedule objectives, a task-oriented modification of the widely used Joint Photographic Experts Group (JPEG) standard for still image data compression was implemented as a starting version of the compression software; this version was delivered within a month after project kick-off. Subsequently, extensions and improvements of performance were investigated and implemented: (a) selection between 8- and 12-bit pixel representation; (b) implementation of arithmetic coding (improved performance); (c) implementation of a "local cosine transform" (improved performance).

  • Lossless compression using conditional entropy-constrained subband quantization

    Publication Year: 1995
    Cited by:  Papers (1)

    Summary form only given, as follows. The browse-and-residual compression strategy has been shown to be effective for data archival and telebrowsing of scientific databases. This paper introduces a hybrid lossless image compression technique that couples lossy subband quantization with lossless coding of the residual. The tradeoff between the rates expended on the browse and residual images is analyzed, and the effects of different distortion measures used in compressing the browse image on the compression of the residual are investigated. The algorithm is shown to provide competitive lossless compression as well as the flexibility of progressive transmission, at a moderate computational complexity.
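
    Structurally, the scheme is a lossy browse image plus a losslessly coded residual. The sketch below shows only that two-layer structure, with made-up stand-ins throughout: a random image, crude requantization instead of entropy-constrained subband quantization, and zlib instead of the paper's residual coder. The point is that browse + residual reconstructs the original exactly.

```python
import numpy as np
import zlib

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)  # stand-in; real images compress far better

# "Browse" image: a crude lossy approximation.  (The paper uses entropy-constrained
# subband quantization here; coarse requantization is only a placeholder.)
browse = (image // 16) * 16 + 8
residual = image.astype(np.int16) - browse                     # what the lossless layer must carry

# The pair (browse, residual) reproduces the original exactly.
assert np.array_equal(browse.astype(np.int16) + residual, image)

browse_bits = 8 * len(zlib.compress(browse.tobytes(), 9))      # zlib stands in for the actual coders
residual_bits = 8 * len(zlib.compress((residual + 128).astype(np.uint8).tobytes(), 9))
print(f"browse + residual: {(browse_bits + residual_bits) / image.size:.2f} bits/pixel, lossless overall")
```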

  • A massively parallel algorithm for vector quantization

    Publication Year: 1995
    Cited by:  Papers (2)

    Summary form only given, as follows. This work is concerned with the parallel implementation of a vector quantizer system on the MasPar MP-2, a single-instruction, multiple-data (SIMD) massively parallel computer. A vector quantizer (VQ) consists of two mappings: an encoder and a decoder. The encoder assigns to each input vector the index of the codevector that is closest to it; the decoder uses this index to reconstruct the signal. In our work, the Euclidean distortion measure is used to find the codevector closest to each input vector. The work described in this paper used a MasPar MP-2216 located at the Goddard Space Flight Center, Greenbelt, Maryland. This system has 16,384 processor elements (PEs) arranged in a rectangular array of 128×128 nodes. The parallel VQ algorithm is based on pipelining. The codevectors are distributed equally among the PEs in the first row of the PE array and then duplicated on the remaining processor rows, so that traversing along any row of the PE array amounts to traversing the entire codebook. After populating the PEs with the codevectors, the input vectors are presented to the first column of PEs, each PE receiving one vector at a time, and the first set of data vectors is compared with the group of codevectors in the first column. Each input vector is associated with a data packet containing the input vector itself, the minimum value of the distortion between the input vector and the codevectors it has encountered so far, and the index of the codevector that accounted for that current minimum. After its entries are updated, the packet is shifted one column to the right in the PE array, and the next set of input vectors takes its place in the first column. The process is repeated until all the input vectors are exhausted; the indices for the first set of data vectors are obtained after an appropriate number of shifts, and the remaining indices follow in subsequent shifts. Results of extensive performance evaluations are presented in the full-length paper. These results suggest that the algorithm makes very efficient use of the parallel capabilities of the MasPar system, and the existence of efficient algorithms such as this one should increase the usefulness and applicability of vector quantizers in Earth and space science applications.
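
    The packet-shifting scheme can be mimicked serially. The sketch below collapses the time-staggering of the real SIMD pipeline into a column-by-column loop (the sizes and codebook are made-up stand-ins), but keeps the essential bookkeeping: each packet carries a running minimum distortion and index and visits one codebook slice per column.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, cols, per_col = 16, 8, 32                 # toy sizes; the MP-2216 has a 128x128 PE grid
col_books = np.split(rng.random((cols * per_col, dim)), cols)   # codebook slice held by each column
data = rng.random((40, dim))                                    # input vectors entering at column 0

def visit_column(packet, col):
    """One pipeline stage: compare the packet's vector with this column's codevectors and
    keep the running minimum distortion and the (global) index that produced it."""
    vec, best, idx = packet
    d = ((col_books[col] - vec) ** 2).sum(axis=1)
    j = int(np.argmin(d))
    if d[j] < best:
        best, idx = float(d[j]), col * per_col + j
    return vec, best, idx

# Packets shift one column to the right per step; this serialized loop replaces the
# staggered hardware shifts but applies the same per-column updates.
packets = [(v, np.inf, -1) for v in data]
for col in range(cols):
    packets = [visit_column(p, col) for p in packets]

print([idx for _, _, idx in packets][:10])
```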

  • Author index

    Publication Year: 1995
  • Coding gain of intra/inter-frame subband systems

    Publication Year: 1995
    Cited by:  Papers (1)

    Summary form only given. Typical image sequence coders use motion compensation in connection with coding of the motion-compensated difference images (interframe coding); moreover, the difference loop is initialized from time to time by intraframe coding of images. It is therefore important to have a procedure for evaluating the performance of a particular coding scheme: coding gain and rate-distortion figures are used for this purpose in this work. We present an explicit procedure to compute the coding gain for two-dimensional separable subband systems, both for a uniform and for a pyramid subband decomposition, and for the case of interframe coding. The technique operates in the signal domain and requires knowledge of the autocorrelation function of the input process. In the case of a separable subband system and image spectrum, the coding gain can be computed by combining the results for appropriately defined one-dimensional filtering schemes, making the technique very attractive in terms of computational complexity. The developed procedure is applied to compute the subband coding gain for motion-compensated signals in the case of images modeled as separable Markov processes; different filter banks are compared to each other and to transform coding. In order to gauge the effectiveness of motion compensation, we also compute the coding gain for intraframe images. We show that the results for the image models are in very good agreement with those obtained with real-world data.
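
    For a one-dimensional flavor of what coding gain measures here, the sketch below estimates the classic subband coding gain (arithmetic over geometric mean of the subband variances) for a two-band Haar split of an AR(1) source. The filter bank, the correlation value, and the 1-D setting are all simplifications of the 2-D separable systems treated in the paper.

```python
import numpy as np

rho, N = 0.95, 100000                  # AR(1) correlation and sample count (assumed values)
rng = np.random.default_rng(0)
x = np.empty(N)
x[0] = rng.normal()
for i in range(1, N):
    x[i] = rho * x[i - 1] + np.sqrt(1 - rho ** 2) * rng.normal()

# Two-band Haar analysis on non-overlapping pairs (a stand-in for the filter banks compared in the paper).
low = (x[0::2] + x[1::2]) / np.sqrt(2)
high = (x[0::2] - x[1::2]) / np.sqrt(2)
variances = np.array([low.var(), high.var()])

# Classic subband coding gain: arithmetic over geometric mean of the subband variances.
gain = variances.mean() / np.exp(np.log(variances).mean())
print(f"estimated coding gain {gain:.2f}  (Haar/AR(1) theory: {1 / np.sqrt(1 - rho ** 2):.2f})")
```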

  • Recursively indexed vector quantization of non-stationary sources

    Publication Year: 1995
    Cited by:  Papers (2)

    Summary form only given. We present a recursively indexed vector quantizer with the following properties: (1) it is simple to implement, with low computational overhead; (2) it is an adaptive algorithm and is therefore well suited to applications where the source is non-stationary; (3) the output rate can easily be changed, making it suitable for applications requiring rate control, such as transmission over packet-switched networks; and (4) the input vectors can be quantized to within a user-specified distortion on a per-vector basis rather than on average. We call the algorithm forward adaptive even though it also uses past outputs for adaptation. We have tested the algorithm on a number of synthetic hidden Markov sources and on a video sequence; the results of both tests compare favorably with existing results in the literature.

  • Video coding using 3 dimensional DCT and dynamic code selection

    Publication Year: 1995
    Cited by:  Papers (1)

    Summary form only given. We address the quality issue and present a method for improved coding of the 3D DCT coefficients. A performance gain is achieved through the use of dynamically selected multiple coding algorithms, and the resulting performance is excellent, giving a compression ratio of greater than 100:1 for image reproduction. The process consists of stacking 8 frames and breaking the data into 8×8×8 pixel cubes. The three-dimensional DCT is applied to each cube. Each cube is then scanned in each dimension to determine whether significant energy exists beyond the first two coefficients; significance is determined with separate thresholds for each dimension, and a single bit of side information is transmitted for each dimension of each cube to indicate whether more than two coefficients will be transmitted. The remaining coefficients of all cubes are reordered into a linear array such that the elements with the highest expected energies appear first and those with lower expected energies appear last, which tends to group coefficients with similar statistical properties for the most efficient coding. Eight different encoding methods are used to convert the coefficients into bits for transmission, and the Viterbi algorithm is used to select the best coding method, with the cost function being the number of bits that need to be sent. Each of the eight coding methods is optimized for a different range of values.
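
    The cube transform and the per-dimension significance test are the mechanical core of the scheme. The sketch below applies a separable 3-D DCT to one 8×8×8 cube and derives the three side-information bits; the cube contents and threshold values are arbitrary stand-ins, and the coefficient reordering, the eight coders, and the Viterbi selection are not shown.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II matrix used separably along each axis of the cube."""
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] /= np.sqrt(2.0)
    return C

C = dct_matrix(8)
rng = np.random.default_rng(0)
cube = rng.random((8, 8, 8))                     # one 8x8x8 spatio-temporal cube (stand-in data)

# Separable 3-D DCT: apply the 1-D transform along t, y and x in turn.
coef = np.einsum('it,jy,kx,tyx->ijk', C, C, C, cube)

# Per-dimension significance test: is there meaningful energy beyond the first two coefficients?
thresholds = {'t': 1.0, 'y': 1.0, 'x': 1.0}      # assumed values; the paper uses separate, tuned thresholds
energy_beyond = {'t': float((coef[2:, :, :] ** 2).sum()),
                 'y': float((coef[:, 2:, :] ** 2).sum()),
                 'x': float((coef[:, :, 2:] ** 2).sum())}
side_bits = {d: int(energy_beyond[d] > thresholds[d]) for d in 'tyx'}
print(side_bits)                                  # one bit of side information per dimension of the cube
```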

  • Constraining the size of the instantaneous alphabet in trellis quantizers

    Publication Year: 1995, Page(s): 23 - 32

    A method is developed for decreasing the computational complexity of a trellis quantizer (TQ) encoder. We begin by developing a rate-distortion theory under a constraint on the average instantaneous number of quanta considered. This constraint has practical importance: in a TQ, the average instantaneous number of quanta is exactly the average number of multiplies required at the encoder. The theory shows that if the conditional probability of each quantum is restricted to a finite region of support, the instantaneous number of quanta considered can be made quite small at little or no cost in SQNR performance. Simulations of TQs confirm this prediction. This reduction in complexity makes practical the use of model-based TQs (MTQs), which had previously been considered computationally unreasonable. For speech, performance gains of several dB in SQNR over adaptive predictive schemes at a similar computational complexity are obtained using only a first-order MTQ.

  • Tree-structured vector quantization with significance map for wavelet image coding

    Publication Year: 1995, Page(s): 33 - 41
    Cited by:  Papers (8)  |  Patents (2)

    Variable-rate tree-structured VQ is applied to the coefficients obtained from an orthogonal wavelet decomposition. After encoding a vector, we examine the spatially corresponding vectors in the higher subbands to see whether or not they are “significant”, that is, above some threshold. One bit of side information is sent to the decoder to inform it of the result. When the higher bands are encoded, those vectors which were earlier marked as insignificant are not coded. An improved version of the algorithm makes the decision not to code vectors from the higher bands based on a distortion/rate tradeoff rather than a strict thresholding criterion. Results of this method on the test image “Lena” yielded a PSNR of 30.15 dB at 0.174 bits per pixel.
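
    The significance-map bookkeeping is easy to see in isolation. The sketch below uses made-up Laplacian-distributed stand-ins for a finer subband whose 2×2 vectors correspond to already-encoded coarse-band positions, applies the strict-threshold version of the test, and counts the one-bit side information and the vectors that would be skipped; the tree-structured VQ itself and the distortion/rate variant are not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for a finer subband of an orthogonal wavelet decomposition: each already-encoded
# coarse-band position corresponds to one 2x2 vector here.
fine = rng.laplace(scale=1.0, size=(32, 32))
T = 2.0                                               # significance threshold (assumed value)

side_bits = coded = skipped = 0
for i in range(16):
    for j in range(16):
        child = fine[2 * i:2 * i + 2, 2 * j:2 * j + 2]  # spatially corresponding 2x2 vector
        side_bits += 1                                   # one bit tells the decoder which case applies
        if np.abs(child).max() > T:
            coded += 1                                   # would be handed to the tree-structured VQ
        else:
            skipped += 1                                 # marked insignificant: not coded at all
print(side_bits, coded, skipped)
```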

  • Matching pursuit video coding at very low bit rates

    Publication Year: 1995, Page(s): 411 - 420
    Cited by:  Papers (8)  |  Patents (10)

    Matching pursuits refers to a greedy algorithm which matches structures in a signal to a large dictionary of functions. In this paper, we present a matching-pursuit-based video coding system which codes motion residual images using a large dictionary of Gabor functions. One feature of our system is that bits are assigned progressively to the highest-energy areas in the motion residual image. The large dictionary size is another advantage, since it allows structures in the motion residual to be represented using few significant coefficients. Experimental results compare the performance of the matching-pursuit system to a hybrid-DCT system at various bit rates between 6 and 128 kbit/s. Additional experiments show how the matching-pursuit system performs if the Gabor dictionary is replaced by an 8×8 DCT dictionary.
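
    The greedy core of matching pursuit is compact enough to sketch directly. The code below runs generic matching pursuit on a made-up unit-norm random dictionary and a random block standing in for a motion residual; the Gabor dictionary, the progressive bit assignment, and the coefficient coding of the actual system are not reproduced.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms):
    """Greedy matching pursuit: repeatedly pick the dictionary atom with the largest
    inner product with the residual and subtract its contribution."""
    residual = signal.astype(float).copy()
    picks = []
    for _ in range(n_atoms):
        corr = dictionary @ residual            # atoms are the (unit-norm) rows
        k = int(np.argmax(np.abs(corr)))
        picks.append((k, float(corr[k])))        # (atom index, coefficient) would be coded
        residual -= corr[k] * dictionary[k]
    return picks, residual

rng = np.random.default_rng(0)
atoms = rng.normal(size=(512, 64))              # stand-in dictionary (Gabor atoms in the paper)
atoms /= np.linalg.norm(atoms, axis=1, keepdims=True)
block = rng.normal(size=64)                      # stand-in motion-residual block
picks, res = matching_pursuit(block, atoms, 8)
print(len(picks), float(np.linalg.norm(res)) < float(np.linalg.norm(block)))
```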

  • Vector quantization for lossless textual data compression

    Publication Year: 1995
    Cited by:  Patents (1)

    Summary form only given. Vector quantisation (VQ) may be adapted for lossless data compression if the data exhibit vector structures, as in textual relational databases. Lossless VQ is discussed, and it is demonstrated that a relation of tuples may be encoded and allocated to physical disk blocks such that standard database operations such as access, insertion, deletion, and update may be fully supported.

  • A high performance block compression algorithm for small systems-software and hardware implementations

    Publication Year: 1995

    Summary form only given. A new algorithmic approach to block data compression is described that uses a highly contextual codification of the dictionary and gives substantial compression-rate advantages over existing technologies. The algorithm takes into account the limitations and characteristics of small systems, such as low memory consumption, high speed, and short latency, as required by communication applications. It uses a novel construction of the prefix-free dictionary, a simple but powerful heuristic for filtering out the non-compressed symbols, and predictive dynamic prefix coding for the output entities. It also employs universal codification of the integers, allowing a very fast and direct implementation in silicon. A dynamic compression software package is detailed, and several techniques developed to maximize the usable disk space and the software speed, among others, are discussed.

  • Lattice-based designs of direct sum codebooks for vector quantization

    Publication Year: 1995
    Cited by:  Papers (1)

    Summary form only given. A direct sum codebook (DSC) has the potential to reduce both the memory and the computational costs of vector quantization. A DSC consists of several sets, or stages, of vectors; an equivalent code vector is formed from the direct sum of one vector from each stage. Such a structure, with p stages containing m vectors each, has m^p equivalent code vectors while requiring the storage of only m·p vectors. DSC quantizers are not only memory efficient, they also have a naturally simple encoding algorithm, called a residual encoding, which uses the nearest neighbor at each stage and requires comparison with only m·p vectors rather than all m^p possible combinations. Unfortunately, this encoding algorithm is suboptimal because of a problem called entanglement: entanglement occurs when a different code vector from the one obtained by a residual encoding is actually a better fit for the input vector. An optimal encoding can be obtained by an exhaustive search, but this sacrifices the savings in computation. Lattice-based DSC quantizers are designed to be optimal under a residual encoding by avoiding entanglement: successive stages of the codebook produce finer and finer partitions of the space, resulting in equivalent code vectors which are points in a truncated lattice. After the initial design, the codebook can be optimized for a given source, increasing performance beyond that of a simple lattice vector quantizer. Experimental results show that DSC quantizers based on cubical lattices perform as well as exhaustive-search quantizers on a scalar source.
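
    The residual-encoding idea is easy to sketch. The code below builds a made-up direct sum codebook with p = 3 stages of m = 16 vectors from random data and encodes a vector with the m·p-cost residual search; the lattice-based design that removes entanglement is not reproduced, so this only illustrates the encoding structure and its cost, not the optimality result.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, m, p = 8, 16, 3                                   # toy sizes: p stages of m vectors each
stages = [rng.normal(scale=1.0 / (s + 1), size=(m, dim)) for s in range(p)]

def residual_encode(x):
    """Residual encoding: take the nearest vector at each stage of the running residual.
    Cost is m*p distance computations instead of the m**p needed for an exhaustive search."""
    residual, indices = x.copy(), []
    for stage in stages:
        d = ((stage - residual) ** 2).sum(axis=1)
        k = int(np.argmin(d))
        indices.append(k)
        residual -= stage[k]                            # equivalent code vector is the direct sum
    return indices, residual

x = rng.normal(size=dim)
idx, res = residual_encode(x)
recon = sum(stage[k] for stage, k in zip(stages, idx))  # direct sum of one vector per stage
print(idx, bool(np.allclose(x - recon, res)))
```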

  • Parallel algorithms for the static dictionary compression

    Publication Year: 1995, Page(s): 162 - 171
    Cited by:  Papers (6)

    This paper studies parallel algorithms for two static dictionary compression strategies. One is optimal dictionary compression with dictionaries that have the prefix property, for which our algorithm requires O(L + log n) time and O(n) processors, where L is the maximum allowable length of the dictionary entries, while previous results run in O(L + log n) time using O(n²) processors, or in O(L + log²n) time using O(n) processors. The other is longest-fragment-first (LFF) dictionary compression, for which our algorithm requires O(L + log n) time and O(nL) processors, while the previous result has O(L log n) time performance on O(n/log n) processors. We also show that sequential LFF dictionary compression can be computed online with a lookahead of length O(L²).

  • Improving LZFG data compression algorithm

    Publication Year: 1995

    Summary form only given. This paper presents two approaches to improving the LZFG data compression algorithm. One is to introduce a self-adaptive word-based scheme that achieves a significant improvement for English text compression; the other is to apply a simple move-to-front scheme to further reduce the redundancy in the statistics of the copy nodes. Experiments show that an overall improvement is achieved by both approaches. The self-adaptive word-based scheme takes each run of consecutive English characters as one word, while any other ASCII character is taken as a single word; for example, the input message "(2+x) is represented by y" can be classified into 9 words. To run the word-based scheme on a PATRICIA tree, the data structure is modified accordingly.
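
    The word segmentation rule can be expressed as a one-line tokenizer. The sketch below is one plausible reading of the rule; treating blanks purely as separators rather than as words is an assumption made here so that the example message does come out as 9 words.

```python
import re

def words(message: str):
    """Word-based segmentation: a run of English letters is one word and any other
    non-blank character is a word by itself.  Treating blanks as separators (not words)
    is an assumption made to match the 9-word example in the abstract."""
    return re.findall(r"[A-Za-z]+|[^A-Za-z\s]", message)

print(words("(2+x) is represented by y"))
# ['(', '2', '+', 'x', ')', 'is', 'represented', 'by', 'y']  -> 9 words
```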
