An adaptive predictor with automatic context modeling (APACM) is proposed for lossless image coding in this paper. The main prediction stage of APACM is a three-layer back-propagation neural network. Because real images are nonstationary, a fixed predictor cannot cope with the varying statistics of the input; the network weights of APACM are therefore adapted on the fly, using the causal neighbors of the coding pixel as training patterns. For error compensation, context modeling is automated by vector quantization based on a modified unsupervised fuzzy competitive learning (UFCL) algorithm. The refined prediction errors are then entropy coded with conditional arithmetic coding to produce the code stream. Experiments show that the proposed APACM removes the redundancy between image pixels efficiently; comparisons with existing state-of-the-art predictors demonstrate its usefulness.
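The core idea of the prediction stage can be illustrated with a minimal sketch: a small multilayer perceptron predicts each pixel from its causal neighbors in raster-scan order and is updated by one back-propagation step after every pixel, so that decoder and encoder can stay synchronized without transmitting weights. The network size, neighbor set, and learning rate below are illustrative assumptions; the abstract does not specify the paper's exact topology or training schedule.

```python
import numpy as np

rng = np.random.default_rng(0)

def causal_neighbors(img, r, c):
    # W, N, NW, NE neighbors of pixel (r, c); out-of-bounds treated as 0.
    # (Assumed neighbor set -- the abstract only says "causal neighbors".)
    h, w = img.shape
    def px(rr, cc):
        return img[rr, cc] if 0 <= rr < h and 0 <= cc < w else 0.0
    return np.array([px(r, c - 1), px(r - 1, c),
                     px(r - 1, c - 1), px(r - 1, c + 1)])

class TinyMLP:
    # One-hidden-layer network adapted on the fly (sizes are assumptions).
    def __init__(self, n_in=4, n_hid=6, lr=0.1):
        self.W1 = rng.normal(0.0, 0.1, (n_hid, n_in))
        self.b1 = np.zeros(n_hid)
        self.W2 = rng.normal(0.0, 0.1, n_hid)
        self.b2 = 0.0
        self.lr = lr

    def forward(self, x):
        self.x = x
        self.h = np.tanh(self.W1 @ x + self.b1)
        return float(self.W2 @ self.h + self.b2)

    def update(self, y_true, y_pred):
        # One back-propagation step on the squared prediction error.
        e = y_pred - y_true
        gh = e * self.W2 * (1.0 - self.h ** 2)
        self.W2 -= self.lr * e * self.h
        self.b2 -= self.lr * e
        self.W1 -= self.lr * np.outer(gh, self.x)
        self.b1 -= self.lr * gh

def predict_residuals(img):
    # Raster scan: predict each pixel from causal context, record the
    # residual, then adapt the weights using the true pixel value.
    img = img.astype(float) / 255.0          # normalize to [0, 1]
    net = TinyMLP()
    res = np.zeros_like(img)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            x = causal_neighbors(img, r, c)
            p = net.forward(x)
            res[r, c] = img[r, c] - p
            net.update(img[r, c], p)         # on-the-fly adaptation
    return res
```

Because the update uses only already-decoded (causal) pixels, the decoder can replay exactly the same weight trajectory, which is what makes a sequentially adapted predictor usable for lossless coding. The residuals produced here would then go to the context-modeled arithmetic coder described in the abstract.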