
Multi-modal fusion for video understanding


Abstract:

The exploitation of semantic information in computer vision problems can be difficult because of the large difference in representations and levels of knowledge. Image analysis is formulated in terms of low-level features describing image structure and intensity, while high-level knowledge such as purpose and common sense are encoded in abstract, non-geometric representations. In this work we attempt to bridge this gap through the integration of image analysis algorithms with WordNet, a large semantic network that explicitly links related words in a hierarchical structure. Our problem domain is the understanding of broadcast news, as this provides both linguistic information in the transcript and video information. Visual detection algorithms such as face detection and object tracking are applied to the video to extract basic object information, which is indexed into WordNet. The transcript provides topic information in the form of detected keywords. Together, both types of information are used to constrain a search within WordNet for a description of the video content in terms of the most likely WordNet concepts. This project is in its early stages; the general ideas and concepts are presented here.
Date of Conference: 10-12 October 2001
Date Added to IEEE Xplore: 06 August 2002
Print ISBN:0-7695-1245-3
Conference Location: Washington, DC, USA

