Using Co-Occurrence and Segmentation to Learn Feature-Based Object Models from Video

Authors:
T. Stepleton (Robotics Institute, Carnegie Mellon University, Pittsburgh, PA); Tai Sing Lee

A number of recent systems for unsupervised feature-based learning of object models take advantage of co-occurrence: broadly, they search for clusters of discriminative features that tend to coincide across multiple still images or video frames. An intuition behind these efforts is that regularly co-occurring image features are likely to refer to physical traits of the same object, while features that do not often co-occur are more likely to belong to different objects. In this paper we discuss a refinement to these techniques in which multiple segmentations establish meaningful contexts for co-occurrence, or limit the spatial regions in which two features are deemed to co-occur. This approach can reduce the variety of image data necessary for model learning and simplify the incorporation of less discriminative features into the model.
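
To make the segmentation-limited co-occurrence idea concrete, the following minimal Python sketch counts a pair of features as co-occurring in a frame only when both fall inside the same segmentation region. The data layout, function name, and toy feature labels are assumptions chosen for illustration; this is not the authors' actual formulation.

    from collections import defaultdict

    def segment_aware_cooccurrence(frames):
        """Count how often pairs of feature types appear in the same
        segmentation region of the same frame.

        `frames` is a list of dicts; each dict maps a feature id to the
        label of the segment that contains it in that frame. (Structure
        and names are illustrative, not taken from the paper.)
        """
        counts = defaultdict(int)
        for feat_to_segment in frames:
            feats = sorted(feat_to_segment)
            for i, a in enumerate(feats):
                for b in feats[i + 1:]:
                    # Only count the pair when both features lie in the
                    # same segment: the segmentation supplies the context
                    # in which co-occurrence is meaningful.
                    if feat_to_segment[a] == feat_to_segment[b]:
                        counts[(a, b)] += 1
        return counts

    if __name__ == "__main__":
        # Toy example: "wheel" and "door" share a segment in two frames,
        # while "tree" always sits in a different segment.
        frames = [
            {"wheel": 0, "door": 0, "tree": 1},
            {"wheel": 2, "door": 2, "tree": 3},
            {"wheel": 5, "tree": 6},
        ]
        for pair, n in sorted(segment_aware_cooccurrence(frames).items()):
            print(pair, n)

Pairs with high counts under this segment-restricted tally are candidates for belonging to the same object model; features that merely appear in the same frames, but in different regions, no longer contribute spurious evidence.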

Published in:

Seventh IEEE Workshops on Application of Computer Vision (WACV/MOTIONS '05), 2005, Volume 1

Date of Conference:

5-7 Jan. 2005