Ensemble Manifold Regularized Multi-Modal Graph Convolutional Network for Cognitive Ability Prediction


Abstract:

Objective: Multi-modal functional magnetic resonance imaging (fMRI) can be used to make predictions about individual behavioral and cognitive traits based on brain connectivity networks. Methods: To take advantage of complementary information from multi-modal fMRI, we propose an interpretable multi-modal graph convolutional network (MGCN) model, incorporating both fMRI time series and the functional connectivity (FC) between each pair of brain regions. Specifically, our model learns a graph embedding from individual brain networks derived from multi-modal data. A manifold-based regularization term is enforced to consider the relationships of subjects both within and between modalities. Furthermore, we propose gradient-weighted regression activation mapping (Grad-RAM) and edge mask learning to interpret the model, which are then used to identify significant cognition-related biomarkers. Results: We validate our MGCN model on the Philadelphia Neurodevelopmental Cohort to predict individual Wide Range Achievement Test (WRAT) scores. Our model obtains superior predictive performance over single-modality GCNs and other competing approaches. The identified biomarkers are cross-validated across the different interpretation approaches. Conclusion and Significance: This paper develops a new interpretable graph deep learning framework for cognition prediction, with the potential to overcome the limitations of several current data-fusion models. The results demonstrate the power of MGCN in analyzing multi-modal fMRI and discovering significant biomarkers for human brain studies.
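To make the described architecture concrete, the following is a minimal, hypothetical sketch (PyTorch) of the two ingredients named in the abstract: a per-modality GCN that embeds a subject's brain graph from region-wise time series and functional connectivity, and a fusion head that regresses a cognitive score from the concatenated embeddings. The single-layer design and all names (ModalityGCN, MGCN) are assumptions made for illustration, not the authors' released code.

# Hypothetical sketch of a multi-modal GCN for score regression.
# Architecture details and names are illustrative assumptions.
import torch
import torch.nn as nn


class ModalityGCN(nn.Module):
    """One-layer GCN encoder for a single fMRI modality."""

    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, hid_dim)

    def forward(self, x, fc):
        # x:  (n_regions, in_dim)   node features, e.g. BOLD time series per region
        # fc: (n_regions, n_regions) functional connectivity used as edge weights
        deg = fc.sum(dim=1).clamp(min=1e-6)
        prop = fc / deg.unsqueeze(1)           # row-normalized propagation matrix
        h = torch.relu(self.lin(prop @ x))     # aggregate neighbors, then transform
        return h.mean(dim=0)                   # graph-level (subject) embedding


class MGCN(nn.Module):
    """Fuse embeddings from several modalities and regress a cognitive score."""

    def __init__(self, in_dims, hid_dim):
        super().__init__()
        self.encoders = nn.ModuleList(ModalityGCN(d, hid_dim) for d in in_dims)
        self.head = nn.Linear(hid_dim * len(in_dims), 1)

    def forward(self, xs, fcs):
        # xs, fcs: lists with one (node features, connectivity) pair per modality
        z = torch.cat([enc(x, fc) for enc, x, fc in zip(self.encoders, xs, fcs)])
        return self.head(z)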
Published in: IEEE Transactions on Biomedical Engineering ( Volume: 68, Issue: 12, December 2021)
Page(s): 3564 - 3573
Date of Publication: 11 May 2021

PubMed ID: 33974537

I. Introduction

Functional magnetic resonance imaging (fMRI) provides a non-invasive, high-resolution technique for observing low-frequency fluctuations in blood-oxygenation-level-dependent (BOLD) signals to characterize the metabolism of the human brain. Recent evidence [1]–[3] suggests that multiple fMRI datasets contain complementary information and can predict individual variations in behavioral and cognitive traits better than a single dataset. Numerous data-fusion methods have been developed to integrate multiple fMRI paradigms. For instance, ICA-based approaches [4], [5] were proposed by Calhoun et al. and Sui et al. to analyze the joint information from multiple fMRI paradigms. Jie et al. [6] and Zhu et al. [7] proposed manifold regularized multi-task learning models to describe the subject-subject and the response-response relationships. These models were further extended by Xiao et al. [8], [9] to incorporate relational information both within and between modalities. However, these approaches are typically based on linear models and do not capture the complex nonlinear relationships among the data.
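The manifold regularization idea referenced above, extended to subject relations both within and between modalities, can be written as a weighted combination of graph-Laplacian penalties over subject-similarity graphs. Below is a hedged sketch of such an ensemble regularizer; the function names, the unnormalized Laplacian, and the fixed combination weights are illustrative assumptions, not the paper's exact formulation.

# Hypothetical ensemble manifold regularizer: each subject-similarity graph
# (within one modality or linking two modalities) contributes a Laplacian
# penalty trace(Z^T L_k Z), and the penalties are combined with nonnegative
# weights. Names and weighting scheme are illustrative assumptions.
import torch


def graph_laplacian(similarity):
    # Unnormalized Laplacian L = D - S for a subject-similarity matrix S.
    return torch.diag(similarity.sum(dim=1)) - similarity


def ensemble_manifold_reg(embeddings, similarity_graphs, weights):
    # embeddings:        (n_subjects, d) subject embeddings or predictions Z
    # similarity_graphs: list of (n_subjects, n_subjects) similarity matrices S_k
    # weights:           nonnegative coefficients w_k, one per graph
    # Returns sum_k w_k * trace(Z^T L_k Z); each term equals
    # 0.5 * sum_ij S_k[i, j] * ||z_i - z_j||^2, so subjects deemed similar are
    # pulled toward nearby embeddings.
    reg = embeddings.new_zeros(())
    for w, s in zip(weights, similarity_graphs):
        reg = reg + w * torch.trace(embeddings.T @ graph_laplacian(s) @ embeddings)
    return reg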
