The phenomenal growth of video on the Web, combined with the increasing sparseness of its associated metadata, forces us to look to the video content itself for signals to support search, information retrieval, and browsing-based corpus exploration. A large share of users' search and browsing patterns center on the people who appear in a video. Doing this at scale remains hard because of a) the absence of labeled data for such a large set of people and b) the large variation in pose, illumination, expression, age, occlusion, quality, etc. in the target corpus. We propose a system that learns and recognizes faces by combining signals from large-scale, weakly labeled text, image, and video corpora. First, consistency learning is proposed to create face models for popular persons: we use text-image co-occurrence on the Web as a weak relevance signal and learn a set of consistent face models from this very large and noisy training set. Second, efficient and accurate face detection and face tracking are applied. Last, the key faces in each face track are selected by clustering to obtain a compact and robust representation; the face tracks are further clustered to obtain more representative key faces and to remove duplicates. For each cluster of face tracks, a combination of majority voting and probabilistic voting against the automatically learned models assigns an identity. The effectiveness of our framework is demonstrated on image and video corpora, in which we achieve 92.68% on 37 million images and 80% top-5 precision on 1,500 hours of video.
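The abstract describes combining majority voting and probabilistic voting over a cluster of face tracks. A minimal sketch of how such a blended vote could work, assuming each key face comes with per-identity probabilities from the learned face models (the `key_face_scores` interface and the `alpha` blending weight are hypothetical, not from the paper):

```python
from collections import Counter

def classify_track_cluster(key_face_scores, alpha=0.5):
    """Assign an identity to a cluster of face tracks.

    key_face_scores: one dict per key face, mapping identity -> model
    probability (hypothetical interface). alpha blends the two voting
    schemes: 1.0 = pure majority voting, 0.0 = pure probabilistic voting.
    """
    n = len(key_face_scores)

    # Majority voting: each key face casts one vote for its top identity.
    votes = Counter(max(scores, key=scores.get) for scores in key_face_scores)
    majority = {ident: count / n for ident, count in votes.items()}

    # Probabilistic voting: average the model probabilities per identity.
    prob = {}
    for scores in key_face_scores:
        for ident, p in scores.items():
            prob[ident] = prob.get(ident, 0.0) + p / n

    # Blend the two signals and pick the highest-scoring identity.
    combined = {
        ident: alpha * majority.get(ident, 0.0) + (1 - alpha) * prob.get(ident, 0.0)
        for ident in set(majority) | set(prob)
    }
    return max(combined, key=combined.get)
```

For example, three key faces scoring `{"A": 0.9, "B": 0.1}`, `{"A": 0.6, "B": 0.4}`, and `{"A": 0.45, "B": 0.55}` would be labeled "A": majority voting favors A two-to-one, and the averaged probabilities also favor A.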