Saliency information extracted from visual stimuli predicts the attentional behaviour of human observers and therefore plays a key role in visual signal processing. In this letter, we present a hybrid saliency detection method for images that automatically predicts salient regions from both low-level and high-level cues. Unlike existing bottom-up and top-down attentional methods, we exploit a high-level cue imposed by the photographer: the subject of interest is typically kept in focus. Based on this assumption, we estimate the defocus map of the image and integrate it with other low-level features within a Bayesian framework. We compare our algorithm with several state-of-the-art saliency detection methods on the well-known 1000-image EPFL database and demonstrate the superior performance of the proposed algorithm.
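The abstract does not give the exact fusion formula, but the idea of combining a defocus-derived cue with low-level features under a Bayesian framework can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses gradient magnitude as a crude inverse-defocus (sharpness) proxy, global contrast as the low-level cue, and a naive-Bayes-style product as the fusion rule; all function names and the independence assumption are ours.

```python
import numpy as np

def normalize(m):
    # Rescale a map to [0, 1] so maps are comparable before fusion.
    m = m - m.min()
    return m / (m.max() + 1e-12)

def sharpness_map(img):
    # Crude inverse-defocus proxy: in-focus regions have stronger gradients.
    # (The paper estimates a proper defocus map; this stands in for it.)
    gy, gx = np.gradient(img.astype(float))
    return normalize(np.hypot(gx, gy))

def contrast_map(img):
    # Simple low-level cue: absolute deviation from the global mean intensity.
    img = img.astype(float)
    return normalize(np.abs(img - img.mean()))

def bayesian_fusion(maps):
    # Naive-Bayes-style fusion: treat each normalized map as an independent
    # likelihood of "pixel is salient" and take their product.
    p = np.ones_like(maps[0])
    for m in maps:
        p = p * (m + 1e-6)  # small floor keeps one zero cue from vetoing all
    return normalize(p)

# Synthetic test image: flat background with a bright, lightly textured square.
rng = np.random.default_rng(0)
img = np.zeros((64, 64))
img[20:40, 20:40] = 0.8 + 0.2 * rng.random((20, 20))

saliency = bayesian_fusion([sharpness_map(img), contrast_map(img)])
```

Under this toy setup, the in-focus, high-contrast square receives higher fused saliency than the flat background, illustrating how a cue that agrees across both channels is reinforced by the product rule.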