A new framework for content-based image retrieval, which takes advantage of the source characterization property of a universal source coding scheme, is investigated. Based upon a new class of multidimensional incremental parsing algorithms extended from the Lempel-Ziv incremental parsing code, the proposed method captures the occurrence pattern of visual elements in a given image. A linguistic processing technique, namely latent semantic analysis (LSA), is then employed to identify associative ensembles of visual elements, which lay the foundation for intelligent visual information analysis. In 2-D applications, incremental parsing decomposes an image into elementary patches that differ from the conventional fixed square-block patches. When used for compressive representation, incremental parsing is amenable to schemes that do not rely on average distortion criteria, a departure from conventional vector quantization. We call this methodology a parsed representation. In this article, we present our implementations of an image retrieval system, called IPSILON, with parsed representations induced by different perceptual distortion thresholds. We evaluate the effectiveness of the parsed representations by comparing their performance with that of four image retrieval systems: one using conventional vector quantization for visual information analysis under the same LSA paradigm, another using SIMPLIcity, a method based upon image segmentation and integrated region matching, and the other two based upon query-by-semantic-example and query-by-visual-example. The first two were tested with 20 000 images of natural scenes, and the others were tested with a portion of those images.
The experimental results show that the proposed parsed representation efficiently captures the salient features in visual images and that the IPSILON systems outperform the other systems in terms of retrieval precision and distortion robustness.
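To illustrate the parsing principle that the proposed 2-D scheme extends, the following is a minimal sketch of classical 1-D Lempel-Ziv (LZ78-style) incremental parsing, in which each new phrase is the longest previously seen phrase extended by one fresh symbol. This is an illustrative sketch of the underlying 1-D idea only, not the multidimensional algorithm or the IPSILON implementation described above; the function name and example string are hypothetical.

```python
def incremental_parse(s):
    """LZ78-style incremental parsing: emit each new phrase as the
    longest previously parsed phrase extended by one new symbol."""
    dictionary = {""}   # phrases seen so far; seeded with the empty phrase
    phrases = []
    current = ""
    for ch in s:
        candidate = current + ch
        if candidate in dictionary:
            current = candidate          # keep extending a known phrase
        else:
            dictionary.add(candidate)    # record the novel phrase
            phrases.append(candidate)
            current = ""                 # restart from the empty phrase
    if current:                          # flush any leftover known suffix
        phrases.append(current)
    return phrases

# A hypothetical input string; the parse yields successively longer
# novel phrases as the dictionary grows.
print(incremental_parse("aababcbababa"))
```

In the 2-D setting of the article, the analogous step grows elementary image patches rather than substrings, which is what produces the non-square patch decomposition mentioned above.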