Due to the intrinsic long-tailed distribution of objects in the real world, we are unlikely to be able to train an object recognizer/detector with many visual examples for each category. We must therefore share visual knowledge between object categories to enable learning with few or no training examples. In this paper, we show that local object similarity information (statements that pairs of categories are similar or dissimilar) is a very useful cue for tying different categories together for effective knowledge transfer. The key insight is this: given a set of categories similar to a target category and a set of dissimilar categories, a good object model should respond more strongly to examples from the similar categories than to examples from the dissimilar ones. To exploit this category-dependent similarity regularization, we develop a regularized kernel machine algorithm that trains kernel classifiers for categories with few or no training examples. We also adapt a state-of-the-art object detector to encode object similarity constraints. Our experiments on hundreds of categories from the LabelMe dataset show that our regularized kernel classifiers yield significant improvements in object categorization. We also evaluate the improved object detector on the PASCAL VOC 2007 benchmark dataset.
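The key insight above can be sketched as a ranking-style regularizer added to a standard max-margin objective. The following is a hypothetical linear-model illustration (the paper's actual method is a regularized kernel machine); all function and parameter names here (`similarity_regularized_loss`, `mu`, `margin`, etc.) are assumptions for exposition, not the authors' implementation.

```python
import numpy as np

def similarity_regularized_loss(w, X, y, X_sim, X_dis,
                                lam=1.0, mu=0.5, margin=1.0):
    """Hinge loss on labeled data plus a similarity-ranking regularizer.

    Illustrative sketch only: the regularizer penalizes the model whenever
    a dissimilar-category example scores within `margin` of a
    similar-category example, encoding the constraint that a good model
    should respond more strongly to similar categories than dissimilar ones.
    """
    # Standard hinge loss on the (possibly very few) labeled examples.
    if len(X):
        scores = X @ w
        hinge = np.maximum(0.0, 1.0 - y * scores).mean()
    else:
        hinge = 0.0  # zero-shot case: no labeled examples for the category

    # Ranking-style similarity constraints: for every pair of one
    # similar-category and one dissimilar-category example, require
    # f(x_sim) >= f(x_dis) + margin, and penalize violations.
    s_sim = X_sim @ w            # responses to similar-category examples
    s_dis = X_dis @ w            # responses to dissimilar-category examples
    violations = np.maximum(0.0, margin + s_dis[None, :] - s_sim[:, None])
    reg = violations.mean()

    # Total objective: data term + similarity term + L2 weight penalty.
    return hinge + mu * reg + lam * 0.5 * (w @ w)
```

Because the data term can be dropped entirely (`X` empty), the same objective covers the zero-training-example setting, where the classifier is shaped only by the similar/dissimilar constraints.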