This paper describes multimodal systems for ad-hoc search constructed by IBM for the TRECVID 2003 benchmark of search systems for broadcast video. These systems all use a late fusion of independently developed speech-based and visual content-based retrieval systems, and they outperform our individual retrieval systems on both the manual and the interactive search tasks. For the manual task, our best system used a query-dependent linear weighting between the speech-based and image-based retrieval systems; its mean average precision (MAP) is 20% above that of our best unimodal system for manual search. For the interactive task, where the user has full knowledge of the query topic and of the performance of the individual search systems, our best system used an interlacing approach. The user determines (subjectively) optimal weights A and B for the speech-based and image-based systems, and the multimodal result set is aggregated by taking the top A documents from the speech-based system, followed by the top B documents from the image-based system, and repeating this process until the desired result-set size is reached. This multimodal interactive search achieves a MAP 40% above that of our best unimodal interactive search system.
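The two fusion strategies described above can be sketched in a few lines. The sketch below is illustrative, not taken from the paper: the function names, the score/rank representations, and the tie-handling details are assumptions. Linear weighting fuses per-document scores with a query-dependent weight alpha; interlacing alternates blocks of A and B documents from the two ranked lists, skipping duplicates.

```python
def linear_fuse(speech_scores, visual_scores, alpha):
    """Query-dependent linear late fusion (sketch, not the paper's exact
    implementation): fused = alpha * speech + (1 - alpha) * visual.
    Documents absent from one system contribute 0 from that system.
    Returns document ids ranked by fused score, highest first."""
    docs = set(speech_scores) | set(visual_scores)
    fused = {d: alpha * speech_scores.get(d, 0.0)
                + (1.0 - alpha) * visual_scores.get(d, 0.0)
             for d in docs}
    return sorted(fused, key=fused.get, reverse=True)


def interlace(speech_ranked, visual_ranked, a, b, k):
    """Interlacing fusion (sketch): repeatedly take the next a documents
    from the speech-based ranking, then the next b from the image-based
    ranking, skipping duplicates, until k documents are collected or
    both lists are exhausted."""
    fused, seen = [], set()
    ia = ib = 0
    while len(fused) < k and (ia < len(speech_ranked) or ib < len(visual_ranked)):
        for _ in range(a):                      # block of a speech results
            if ia < len(speech_ranked):
                doc = speech_ranked[ia]; ia += 1
                if doc not in seen:
                    seen.add(doc); fused.append(doc)
        for _ in range(b):                      # block of b visual results
            if ib < len(visual_ranked):
                doc = visual_ranked[ib]; ib += 1
                if doc not in seen:
                    seen.add(doc); fused.append(doc)
    return fused[:k]
```

For example, with alpha = 0.6 a document scored 0.5 by speech and 1.0 by visual fuses to 0.3 + 0.4 = 0.7; with a = 2, b = 1 the interlaced list starts with two speech results, then one visual result, and so on.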