Bridging the Semantic Gap via Functional Brain Imaging

6 Author(s)

Xintao Hu; Kaiming Li; Junwei Han; Xiansheng Hua; et al.
Sch. of Autom., Northwestern Polytech. Univ., Xi'an, China

The multimedia content analysis community has made significant efforts to bridge the gap between low-level features and the high-level semantics perceived by humans. Recent advances in brain imaging and neuroscience, which explore the human brain's responses during multimedia comprehension, demonstrate the possibility of leveraging cognitive neuroscience knowledge to bridge this semantic gap. This paper presents our initial effort in that direction using functional magnetic resonance imaging (fMRI). Specifically, task-based fMRI (T-fMRI) was performed to accurately localize the brain regions involved in video comprehension. Then, natural-stimulus fMRI (N-fMRI) data were acquired while subjects watched multimedia clips selected from the TRECVID datasets. The responses in the localized brain regions were measured and used to extract high-level features representing the brain's comprehension of the semantics in the videos. A novel computational framework was developed to learn the low-level feature sets that best correlate with the fMRI-derived semantic features on the training videos with fMRI scans; the learned model was then applied to larger-scale TRECVID video datasets without fMRI scans for category classification. Our experimental results demonstrate that: 1) there are meaningful couplings between the brain's fMRI-derived responses and the video stimuli, supporting the validity of linking semantics and low-level features via fMRI; and 2) the computationally learned low-level features can significantly (p < 0.01) improve video classification in comparison with the original low-level features and with low-level features produced by well-known feature projection algorithms.
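The abstract describes learning the low-level video features that best correlate with fMRI-derived semantic features. The paper does not give its algorithm here, but one standard way to learn such maximally correlated linear projections is canonical correlation analysis (CCA); the sketch below is a minimal numpy-only CCA, offered as an illustrative stand-in rather than the authors' actual framework. All variable names (`X` for low-level features, `Y` for fMRI-derived features) are assumptions.

```python
import numpy as np

def cca_projections(X, Y, k=1, reg=1e-3):
    """Illustrative CCA: learn projections Wx, Wy so that Xc @ Wx and
    Yc @ Yc-side Wy are maximally correlated. X could hold low-level
    video features, Y fMRI-derived semantic features (both n x d)."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    n = len(X)
    # Regularized covariance and cross-covariance estimates
    Cxx = Xc.T @ Xc / n + reg * np.eye(X.shape[1])
    Cyy = Yc.T @ Yc / n + reg * np.eye(Y.shape[1])
    Cxy = Xc.T @ Yc / n
    # Whiten via Cholesky factors; SVD of the whitened cross-covariance
    # yields the canonical directions and canonical correlations s
    Lx, Ly = np.linalg.cholesky(Cxx), np.linalg.cholesky(Cyy)
    K = np.linalg.solve(Lx, Cxy) @ np.linalg.inv(Ly).T
    U, s, Vt = np.linalg.svd(K)
    # Map the whitened directions back to the original feature spaces
    Wx = np.linalg.solve(Lx.T, U[:, :k])
    Wy = np.linalg.solve(Ly.T, Vt.T[:, :k])
    return Wx, Wy, s[:k]
```

In the spirit of the paper's pipeline, `Wx` learned on the fMRI-scanned training videos could then project the low-level features of new, unscanned TRECVID videos into the semantically correlated subspace before classification.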

Published in:

IEEE Transactions on Multimedia (Volume: 14, Issue: 2)

Date of Publication:

April 2012
