Scene classification in compressed and constrained domain

2 Author(s)
G. M. Farinella and S. Battiato, Dipt. di Mat. e Inf., Univ. di Catania, Catania, Italy

Holistic representations of natural scenes are an effective and powerful source of information for the semantic classification and analysis of images. Despite hardware and software advances, consumer single-sensor imaging devices are still far from being able to recognise scenes and/or exploit the visual content during (or after) acquisition. The frequency domain has been successfully exploited to holistically encode the content of natural scenes and so obtain a robust representation for scene classification. The authors exploit a holistic representation of the scene in the discrete cosine transform (DCT) domain that is fully compatible with the JPEG format. The proposed representation is coupled with a logistic classifier to classify scenes at a superordinate level of description (e.g. natural versus artificial), or to discriminate between multiple classes of scene typically acquired by a consumer imaging device (e.g. portrait, landscape and document). The proposed method is able to work in a constrained domain. Experiments confirm its effectiveness: the results closely match state-of-the-art methods in terms of accuracy while outperforming them in terms of computational resources.
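To make the general idea concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of the kind of pipeline the abstract describes: 8x8 block DCT coefficients, as produced during JPEG encoding, are pooled into a single holistic descriptor, and a plain logistic classifier is trained on top. All function names and the pooling scheme (mean of absolute coefficients per frequency) are illustrative assumptions.

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix (same transform JPEG applies to 8x8 blocks).
    m = np.zeros((n, n))
    for k in range(n):
        for i in range(n):
            m[k, i] = np.cos(np.pi * k * (2 * i + 1) / (2 * n))
    m[0] *= np.sqrt(1 / n)
    m[1:] *= np.sqrt(2 / n)
    return m

def holistic_dct_descriptor(img, block=8):
    # Tile the grayscale image into 8x8 blocks, DCT each block, then average
    # the absolute coefficients over all blocks -> one 64-dim scene descriptor.
    # (Illustrative pooling choice, not necessarily the paper's exact encoding.)
    D = dct_matrix(block)
    h, w = img.shape
    h, w = h - h % block, w - w % block
    blocks = img[:h, :w].reshape(h // block, block, w // block, block)
    blocks = blocks.transpose(0, 2, 1, 3).reshape(-1, block, block)
    coeffs = np.einsum('ki,nij,lj->nkl', D, blocks, D)  # D @ B @ D.T per block
    return np.abs(coeffs).mean(axis=0).ravel()

def train_logistic(X, y, lr=0.1, iters=500):
    # Plain batch gradient descent on the logistic loss (binary labels 0/1).
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = p - y
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b
```

A descriptor like this is cheap to compute from an already-JPEG-compressed image, since the block DCT coefficients are available before full decoding, which is consistent with the abstract's emphasis on constrained computational resources.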

Published in:

Computer Vision, IET (Volume: 5, Issue: 5)