This paper proposes a model-based method for extracting transformation-invariant area descriptors, in the context of object recognition and image matching. Local image descriptors are extracted from salient circular fragments of variable size, which mark image locations with high intensity contrast, regional homogeneity, and shape saliency. Three types of descriptors (pose, intensity, and area shape) are extracted and concatenated into a single descriptor vector. The pose and intensity descriptors are made relational and normalized to achieve invariance to image similarity transformations and affine intensity changes. The method requires no image segmentation, since the feature points are efficiently extracted in a multi-scale manner by analyzing circular areas of various sizes.
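The abstract does not specify how the intensity normalization is computed, but invariance to an affine intensity change I' = a·I + b is conventionally obtained by shifting a patch to zero mean and scaling it to unit variance. The sketch below illustrates that standard construction; the function name and patch representation are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def normalize_intensity(patch):
    """Map a patch to zero mean and unit variance.

    Under an affine intensity change I' = a*I + b (a > 0), the mean
    shifts by b and scales by a while the standard deviation scales
    by a, so the normalized values are unchanged. (Illustrative
    sketch, not the paper's actual descriptor pipeline.)
    """
    patch = np.asarray(patch, dtype=float)
    std = patch.std()
    if std == 0:
        # Constant patch carries no contrast information.
        return np.zeros_like(patch)
    return (patch - patch.mean()) / std

# A patch and an affinely transformed copy give identical descriptors.
p = np.array([10.0, 20.0, 30.0, 40.0])
q = 2.5 * p + 7.0
print(np.allclose(normalize_intensity(p), normalize_intensity(q)))
```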