To increase the computational efficiency of compression methods, we must account for the fact that new hardware architectures rely increasingly on wider data paths and parallel processing (e.g., SIMD and multi-core) rather than on faster clocks. Higher data throughput is achieved with entropy coding methods that process more information per operation and use simpler context dependencies that can be updated quickly. We propose a coding method with properties better suited to these new processors, which achieves better compression by exploiting patterns of data magnitude. We present experimental results on image coding implementations that take advantage of the fast decay of transform coefficient variance with frequency.
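To illustrate the general idea of exploiting data magnitude patterns (not the paper's specific method), one common approach is to split each quantized transform coefficient into a magnitude class (its bit length) and raw refinement bits. Because coefficient variance decays quickly with frequency, the magnitude classes are heavily skewed toward small values and compress well with entropy coding, while the refinement bits are nearly uniform and can be packed verbatim. The function names below are hypothetical, introduced only for this sketch:

```python
def magnitude_class(v):
    """Magnitude class of a coefficient: 0 for 0, else floor(log2|v|) + 1."""
    return abs(v).bit_length()

def split_coefficients(coeffs):
    """Separate coefficients into entropy-codable classes and raw bits.

    The magnitude classes have low entropy (most coefficients are small),
    so they are the part worth entropy coding; the refinement bits and
    signs are close to uniformly distributed and can be stored as-is.
    """
    classes, refinements = [], []
    for v in coeffs:
        k = magnitude_class(v)
        classes.append(k)
        if k > 0:
            # k-1 low-order magnitude bits (the leading 1 is implied
            # by the class) plus a sign flag.
            refinements.append((abs(v) & ((1 << (k - 1)) - 1), v < 0))
    return classes, refinements

# Example: high-frequency coefficients are mostly zero or small,
# so the class sequence is highly compressible.
classes, refinements = split_coefficients([25, -9, 3, -1, 0, 0, 1, 0])
```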