Contrary to the conventional paradigm of transform decomposition followed by quantization, we investigate the computation of two-dimensional (2D) discrete wavelet transforms (DWT) under quantized representations of the input source. The proposed method builds upon previous research on approximate signal processing and revisits the concept of incremental refinement of computation: under a refinement of the source description (via an embedded quantizer), the computation of the forward and inverse transform refines the previously computed result, thereby leading to incremental computation of the output. In the first part of this paper, we study both the forward and inverse DWT under state-of-the-art 2D lifting-based formulations. Focusing on conventional bitplane-based (double-deadzone) embedded quantization, we propose schemes that achieve incremental refinement of computation for the multilevel DWT decomposition or reconstruction via a bitplane-by-bitplane calculation approach. In the second part, based on stochastic modeling of typical 2D DWT coefficients, we derive an analytical model that estimates the arithmetic complexity of the proposed incremental refinement of computation. The model is parameterized with respect to (i) operational settings, such as the total number of decomposition levels and the terminating bitplane, and (ii) input-source and algorithm-related settings, e.g., the source variance and the complexity related to the choice of wavelet. Based on the derived formulations, we identify the subsets of these model parameters for which the proposed framework achieves reconstruction accuracy identical to that of the conventional approach without incurring any computational overhead.
This is termed successive refinement of computation, since all representation accuracies are produced incrementally under a single (continuous) computation of the refined input source with no overhead in comparison to the conventional calculation approach that specifically targets each accuracy level and is not refinable. Our results, as well as the derived model estimates for incremental refinement, are validated with real video sequences compressed with a scalable coder.
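The key property underlying the bitplane-by-bitplane approach is the linearity of the lifting-based DWT: the transform of a sum of bitplane contributions equals the sum of the per-bitplane transforms, so each new bitplane of the source refines the previously computed coefficients. The following minimal sketch (not the paper's scheme, only an illustration of the principle) uses a single-level 1D Haar lifting step and checks that accumulating the transform bitplane by bitplane, most significant bitplane first, reproduces the conventional full-precision result.

```python
import numpy as np

def haar_lifting(x):
    """One lifting level of the Haar DWT (predict + update).

    Assumes an even-length input; returns [approximation | detail]."""
    s = x[0::2].astype(float)          # even-indexed samples
    d = x[1::2].astype(float) - s      # predict: detail = odd - even
    s = s + d / 2                      # update: approximation
    return np.concatenate([s, d])

rng = np.random.default_rng(0)
x = rng.integers(0, 256, size=8)       # 8-bit source samples

# Conventional approach: transform the full-precision source once.
full = haar_lifting(x)

# Incremental approach: refine the source one bitplane at a time
# (MSB first) and accumulate the transform of each contribution.
# By linearity, every partial sum is the exact transform of the
# source dequantized down to the current terminating bitplane.
acc = np.zeros_like(full)
for b in range(7, -1, -1):
    plane = ((x >> b) & 1) << b        # contribution of bitplane b
    acc += haar_lifting(plane)

assert np.allclose(acc, full)
```

The same argument extends to multilevel 2D lifting decompositions, since separable row/column lifting steps remain linear in the input; the complexity question studied in the paper is when this incremental accumulation costs no more arithmetic than the one-shot computation.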