This paper presents lossy compression algorithms that build on a state-of-the-art codec, the Set Partitioned Embedded Block Coder (SPECK), by incorporating a lattice vector quantizer codebook, thereby allowing it to process multiple samples at a time. In our tests, we employ scenes derived from standard AVIRIS hyperspectral images, which possess 224 spectral bands. The first proposed method, LVQ-SPECK, uses a lattice vector quantizer-based codebook in the spectral direction to encode a number of consecutive bands equal to the codeword dimension. It is shown that the choice of orientation codebook used in the encoding strongly influences performance. In fact, even though the method does not make use of a 3-D discrete wavelet transform, in some cases it produces results comparable to those of other state-of-the-art 3-D codecs. The second proposed algorithm, DWP-SPECK, incorporates the 1-D discrete wavelet transform in the spectral direction, producing a discrete wavelet packet decomposition, and simultaneously encodes a larger number of spectral bands. This method yields performance comparable or superior to that attained by other 3-D wavelet coding algorithms such as 3D-SPECK and JPEG2000 (in its multi-component version). We also present a novel method for reducing the number of codewords used during the refinement pass in the proposed methods which, for most codebooks, provides a reduction in rate while following the same encoding path as the original methods, thereby improving their performance. We show that it is possible to separate the original codebook into two distinct classes, and to use a flag when sending refinement information to indicate to which class this information belongs. In summary, given the results obtained by our proposed methods, we show that they constitute a viable option for the compression of large volumetric datasets.
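To make the lattice vector quantization step concrete: the abstract does not specify the lattice or codeword dimension used, but a common choice in lattice VQ is the D4 lattice (integer 4-vectors with even coordinate sum), whose dimension would correspond to encoding four consecutive spectral bands at a time. The sketch below is an illustrative implementation of the standard nearest-point rule for D4 (round each coordinate, then fix parity), not the paper's actual codebook construction.

```python
def quantize_d4(x):
    """Map a 4-sample vector to its nearest D4 lattice point.

    D4 = all integer 4-vectors whose coordinates sum to an even number.
    Illustrative only: the paper's orientation codebooks are not specified here.
    """
    # Step 1: round each coordinate to the nearest integer.
    f = [round(v) for v in x]
    # Step 2: if the coordinate sum is even, we already have a D4 point.
    if sum(f) % 2 == 0:
        return f
    # Step 3: otherwise, push the coordinate with the largest rounding
    # error to its next-nearest integer, which restores even parity.
    idx = max(range(4), key=lambda i: abs(x[i] - f[i]))
    f[idx] += 1 if x[idx] > f[idx] else -1
    return f
```

Quantizing one 4-vector per pixel is what lets a SPECK-style significance test cover four spectral bands in a single decision, rather than one sample at a time.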