Human observers understand the content of an image intuitively. Based on this understanding, they perform many image-related tasks, such as creating slide shows and photo albums, and organizing their image archives. For example, to select photos for an album, people assess image quality based on the main objects in the image. They modify colors in an image based on the color of important objects, such as sky, grass, or skin. Serious photographers might modify each object separately. Photo applications, in contrast, use low-level descriptors to guide similar tasks. Typical descriptors, such as color histograms, noise level, JPEG artifacts, and overall sharpness, can guide an imaging application and safeguard against blunders. However, there is a gap between the outcome of such operations and the same task performed by a person. We believe that this gap can be bridged by automatically understanding the content of the image. This paper presents algorithms for automatic tagging of perceptual objects in images, including sky, skin, and foliage, which constitutes an important step toward this goal.