Abstract:
Modeling visual question answering (VQA) through scene graphs can significantly improve reasoning accuracy and interpretability. However, existing models perform poorly on complex reasoning questions involving attributes or relations, causing false attribute selection or missing relations, as illustrated in Figure 1(a). This is because these models cannot balance all kinds of information in scene graphs, neglecting relation and attribute information. In this paper, we introduce a novel Dual Message-passing enhanced Graph Neural Network (DM-GNN), which obtains a balanced representation by properly encoding multi-scale scene graph information. Specifically, we (i) transform the scene graph into two graphs with diversified focuses on objects and relations, then design a dual structure to encode them, which increases the weight of relation information; (ii) fuse the encoder output with attribute features, which increases the weight of attribute information; and (iii) propose a message-passing mechanism to enhance the information transfer between objects, relations and attributes. We conduct extensive experiments on datasets including GQA, VG and motif-VG, and achieve a new state of the art.
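To make the message-passing idea concrete, the following is a minimal sketch of one round of object-relation message passing over a tiny scene graph. The function name, the additive update rule, and the toy features are illustrative assumptions for exposition only; they are not the paper's actual DM-GNN equations.

```python
# Minimal sketch (illustrative assumptions, not the paper's DM-GNN):
# relation nodes aggregate their endpoint objects, and objects receive
# messages back from their incident relations.
import numpy as np

def message_pass(obj_feats, rel_feats, edges):
    """One round of object<->relation message passing.

    obj_feats: (num_objects, d) object node features
    rel_feats: (num_relations, d) relation node features
    edges: list of (subject_idx, relation_idx, object_idx) triples
    """
    new_obj = obj_feats.copy()
    new_rel = rel_feats.copy()
    for s, r, o in edges:
        # relation node aggregates its subject and object endpoints
        new_rel[r] += 0.5 * (obj_feats[s] + obj_feats[o])
        # each endpoint object receives a message from the relation
        new_obj[s] += rel_feats[r]
        new_obj[o] += rel_feats[r]
    return new_obj, new_rel

# Toy example: "man holding cup" -> objects {man, cup}, relation {holding}
obj = np.ones((2, 4))
rel = np.zeros((1, 4))
edges = [(0, 0, 1)]
new_obj, new_rel = message_pass(obj, rel, edges)
```

In a full model, the additive updates would be replaced by learned transformations, and attribute features would be fused with the encoder output as the abstract describes.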
Date of Conference: 18-22 July 2022
Date Added to IEEE Xplore: 26 August 2022