Image local invariant features have been used in a wide range of applications, e.g., image retrieval, object categorization, and robot localization. Matching local feature points requires a succinct and discriminative descriptor for each point. However, current local descriptors use only neighborhood information, which typically lacks global context and fails to resolve the ambiguities that arise when an image contains multiple similar regions. Although some methods enrich the discriminative power of local descriptors with global or contextual information, the resulting descriptors typically have higher dimensionality, which reduces matching efficiency. This paper proposes a method for matching image local invariant features that uses local and global information separately. First, local feature points are detected in the two images and described using neighborhood information, and initial matches are obtained from the local descriptors. Next, a new coordinate system is created in each image for every pair of initially matched feature points. In these new coordinate systems, the spatial locations of the other matched points form global feature vectors. Finally, the total relative location error is computed for each match to filter out mismatches. The method thus exploits both local and global features rather than characterizing feature points by local neighborhood information alone. Experimental results show that the proposed global feature vector captures the global context of a feature point and filters out mismatches effectively, markedly improving the accuracy of the matched point set.
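The filtering step described above can be sketched as follows. This is a minimal illustration, not the authors' exact formulation: it assumes the new coordinate system simply places its origin at each matched point, and normalizes relative locations by the mean point distance for scale invariance; the paper's actual normalization and error definition may differ, and the function names are hypothetical.

```python
import numpy as np

def relative_location_errors(pts_a, pts_b):
    """For each initial match i, re-express all other matched points in a
    coordinate system anchored at match i (one per image) and accumulate
    the discrepancy between the two sets of relative locations.  A large
    total error suggests match i is inconsistent with the other matches."""
    pts_a = np.asarray(pts_a, dtype=float)
    pts_b = np.asarray(pts_b, dtype=float)
    n = len(pts_a)
    errors = np.empty(n)
    for i in range(n):
        # New coordinate systems: origin at the i-th matched point.
        rel_a = np.delete(pts_a - pts_a[i], i, axis=0)
        rel_b = np.delete(pts_b - pts_b[i], i, axis=0)
        # Normalize by the mean distance to the origin so the comparison
        # tolerates a global scale change (an assumed normalization).
        rel_a /= np.linalg.norm(rel_a, axis=1).mean() + 1e-12
        rel_b /= np.linalg.norm(rel_b, axis=1).mean() + 1e-12
        # Total relative location error for match i.
        errors[i] = np.linalg.norm(rel_a - rel_b, axis=1).mean()
    return errors

def filter_mismatches(pts_a, pts_b, thresh=0.5):
    """Keep the matches whose total relative location error is below thresh."""
    return relative_location_errors(pts_a, pts_b) < thresh
```

For a set of matches that differ only by a global translation, the relative locations agree exactly and all errors are zero; a single wrong correspondence produces a conspicuously larger error for that match, so it can be removed by thresholding.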