We propose a robust method for detecting semantically equivalent but visually dissimilar object parts in the presence of geometric domain variations. The presented algorithms follow the part-based object learning and recognition framework proposed by Epshtein and Ullman, which characterizes the location of a visually dissimilar object part (i.e., the root fragment) as a function of its geometric configuration relative to a set of local context patches (i.e., the context fragments). This work extends the original detection algorithm to handle more realistic geometric domain variations through robust candidate generation, exploitation of the geometric invariances shared by a pair of similar polygons, and SIFT-based context descriptors. An entropic feature selection step is also integrated to further improve performance. Furthermore, robust voting in a maximum-density framework is realized by variable bandwidth mean shift, which yields better root detection even when the corresponding context fragments are detected with significant errors. We evaluate the proposed solution on the task of detecting various facial parts using the FERET database. Our experimental results demonstrate a significant improvement in detection performance and robustness over the original system.
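To illustrate the voting scheme mentioned above, the following is a minimal sketch (not the paper's implementation) of mode seeking with a variable bandwidth mean shift: each context fragment casts a 2-D vote for the root location, every vote carries its own Gaussian bandwidth, and the iteration climbs to the maximum-density point. All function and variable names here are illustrative assumptions.

```python
import numpy as np

def variable_bandwidth_mean_shift(votes, bandwidths, start, tol=1e-5, max_iter=100):
    """Seek the maximum-density location of a set of 2-D votes using a
    Gaussian kernel where each vote i has its own bandwidth h_i.

    votes:      (n, 2) array of candidate root locations
    bandwidths: (n,) array of per-vote bandwidths (larger = less certain vote)
    start:      (2,) initial estimate
    """
    x = np.asarray(start, dtype=float)
    votes = np.asarray(votes, dtype=float)
    h = np.asarray(bandwidths, dtype=float)
    for _ in range(max_iter):
        # Squared distance from the current estimate to every vote
        d2 = np.sum((votes - x) ** 2, axis=1)
        # Gaussian weights; dividing by h**2 down-weights uncertain votes
        w = np.exp(-0.5 * d2 / h ** 2) / h ** 2
        # Weighted mean of the votes = one mean-shift step
        x_new = (w[:, None] * votes).sum(axis=0) / w.sum()
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

# Three consistent votes near (5, 5) plus one gross outlier at (20, 20):
votes = [[5.0, 5.0], [5.1, 4.9], [4.9, 5.1], [20.0, 20.0]]
bandwidths = [1.0, 1.0, 1.0, 1.0]
mode = variable_bandwidth_mean_shift(votes, bandwidths, start=np.mean(votes, axis=0))
```

Because the kernel weight decays exponentially with distance, the outlier vote contributes almost nothing once the estimate approaches the dense cluster, which is the robustness property the abstract attributes to maximum-density voting.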