This paper presents a cost-sensitive rank learning approach to visual saliency estimation. Unlike existing learning-based approaches, it avoids the explicit selection of positive and negative samples: both the positive and the unlabeled data are integrated directly into a rank learning framework in a cost-sensitive manner. Compared with existing approaches, this framework can account for the influences of both local visual attributes and pair-wise contexts simultaneously. Experimental results show that our algorithm substantially outperforms several state-of-the-art approaches in visual saliency estimation.
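To make the idea concrete, the following is a minimal, hypothetical sketch of a cost-sensitive pairwise ranking objective of the kind the abstract describes. It assumes a linear saliency score over per-pixel features, with positive (fixated) samples ranked above unlabeled ones under a margin; the function names, cost weights, and toy data here are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def cost_sensitive_rank_loss(w, X_pos, X_unl, c_pos=1.0, c_unl=0.5, margin=1.0):
    """Cost-weighted pairwise hinge loss (illustrative sketch).

    Each (positive, unlabeled) pair demands that the positive sample
    be scored at least `margin` higher than the unlabeled one;
    violations are weighted by the per-class costs, which is what
    makes the ranking objective cost-sensitive.
    """
    s_pos = X_pos @ w  # saliency scores of positive (fixated) samples
    s_unl = X_unl @ w  # saliency scores of unlabeled samples
    # Pairwise margin violations, shape (n_pos, n_unl).
    viol = np.maximum(0.0, margin - (s_pos[:, None] - s_unl[None, :]))
    return c_pos * c_unl * viol.mean()

# Toy data standing in for local visual-attribute features of pixels.
rng = np.random.default_rng(0)
X_pos = rng.normal(loc=1.0, size=(5, 3))   # hypothetical fixated-pixel features
X_unl = rng.normal(loc=0.0, size=(8, 3))   # hypothetical unlabeled-pixel features

w = np.zeros(3)  # with a zero scorer, every pair violates the full margin
print(cost_sensitive_rank_loss(w, X_pos, X_unl))  # → 0.5 (= c_pos * c_unl * margin)
```

In a full method, `w` would be optimized to drive this loss down; the cost weights let unlabeled samples (which may contain unannotated salient pixels) contribute less heavily than confirmed positives.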