
Combining multimodal and temporal contextual information for semantic video analysis



Abstract:

In this paper, a graphical modeling-based approach to semantic video analysis is presented for jointly realizing modality fusion and temporal context exploitation. Overall, the examined video sequence is initially segmented into shots, and for every resulting shot appropriate color, motion, and audio features are extracted. Then, Hidden Markov Models (HMMs) are employed, separately for every modality, to perform an initial association of each shot with the semantic classes of interest. Subsequently, an integrated Bayesian Network (BN) is introduced for simultaneously performing information fusion and temporal contextual knowledge exploitation, contrary to the usual practice of performing each task separately. The final outcome of the overall video analysis approach is the association of a semantic class with every shot. Experimental results and a comparative evaluation of the proposed approach in the domain of news broadcast video are presented.
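The pipeline the abstract describes (per-modality shot scores, fusion, then temporal context) can be illustrated with a minimal sketch. This is not the authors' integrated BN: it stands in naive-Bayes-style score fusion and a first-order Viterbi pass for the paper's joint model, and the class set and score values are hypothetical.

```python
# Hedged sketch, not the paper's implementation: fuse per-modality class
# scores for each shot, then pick the most likely class sequence under a
# simple first-order transition model (temporal context).

CLASSES = ["anchor", "reporting", "sports"]  # hypothetical class set


def fuse_modalities(shot_scores):
    """Naive-Bayes-style fusion: multiply per-modality likelihoods,
    then normalize over classes."""
    fused = {c: 1.0 for c in CLASSES}
    for modality_scores in shot_scores.values():
        for c in CLASSES:
            fused[c] *= modality_scores[c]
    total = sum(fused.values()) or 1.0
    return {c: v / total for c, v in fused.items()}


def viterbi(per_shot_fused, transition):
    """Most likely class sequence given fused per-shot scores and a
    class-to-class transition table."""
    path = {c: [c] for c in CLASSES}
    prob = {c: per_shot_fused[0][c] for c in CLASSES}
    for obs in per_shot_fused[1:]:
        new_prob, new_path = {}, {}
        for c in CLASSES:
            prev = max(CLASSES, key=lambda p: prob[p] * transition[p][c])
            new_prob[c] = prob[prev] * transition[prev][c] * obs[c]
            new_path[c] = path[prev] + [c]
        prob, path = new_prob, new_path
    best = max(CLASSES, key=lambda c: prob[c])
    return path[best]


if __name__ == "__main__":
    shots = [
        {"color": {"anchor": 0.8, "reporting": 0.1, "sports": 0.1},
         "audio": {"anchor": 0.7, "reporting": 0.2, "sports": 0.1}},
        {"color": {"anchor": 0.2, "reporting": 0.7, "sports": 0.1},
         "audio": {"anchor": 0.3, "reporting": 0.6, "sports": 0.1}},
    ]
    fused = [fuse_modalities(s) for s in shots]
    uniform = {p: {c: 1.0 / len(CLASSES) for c in CLASSES} for p in CLASSES}
    print(viterbi(fused, uniform))  # one class label per shot
```

In the actual paper the per-modality scores come from trained HMMs and the fusion/temporal step is a single integrated BN rather than two separate stages as above.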
Date of Conference: 07-10 November 2009
Date Added to IEEE Xplore: 17 February 2010

Conference Location: Cairo, Egypt

