Abstract:
Existing real-time question answering models have shown speed benefits on open-domain tasks. However, their phrase representations are limited and prone to information loss, which leads to low accuracy. In this paper, we propose modified contextualized sparse and dense encoders to improve context embedding quality. For sparse encoding, we propose JM-Sparse, which uses joint multi-head attention to focus on crucial information at different context locations and then learns sparse vectors within an n-gram vocabulary space. Moreover, we leverage the similarity-enhanced dense (SE-Dense) vector to obtain rich contextual dense representations. To combine dense and sparse features effectively, we dynamically learn the weights of the dense and sparse vectors. Extensive experiments on standard benchmarks demonstrate the effectiveness of the proposed method compared with other query-agnostic models.
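To make the described combination concrete, the sketch below shows one plausible way a phrase score could mix a dense similarity and a sparse (n-gram vocabulary space) similarity with a dynamically learned weight, as the abstract outlines. This is a minimal illustration, not the authors' released code: the class name, the mean-pooling of attended states, and the sigmoid gate conditioned on the dense query and phrase vectors are all assumptions made for the example.

```python
import torch
import torch.nn as nn

class DenseSparsePhraseScorer(nn.Module):
    """Hypothetical sketch of dynamically weighting dense and sparse phrase scores."""

    def __init__(self, hidden_dim: int, ngram_vocab_size: int, num_heads: int = 8):
        super().__init__()
        # Multi-head attention over context token states (stands in for the
        # joint multi-head attention used by JM-Sparse; details assumed).
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)
        # Projects attended context states into an n-gram vocabulary space
        # to produce a sparse phrase vector.
        self.sparse_proj = nn.Linear(hidden_dim, ngram_vocab_size)
        # Scalar gate that dynamically weights the dense vs. sparse scores.
        self.gate = nn.Linear(2 * hidden_dim, 1)

    def forward(self, query_dense, phrase_dense, context_states, query_sparse):
        # query_dense, phrase_dense: (batch, hidden_dim) dense encodings
        # context_states: (batch, seq_len, hidden_dim) contextual token states
        # query_sparse: (batch, ngram_vocab_size) sparse query vector
        attended, _ = self.attn(context_states, context_states, context_states)
        # Pool attended states and project to a non-negative sparse phrase vector.
        phrase_sparse = torch.relu(self.sparse_proj(attended.mean(dim=1)))

        dense_score = (query_dense * phrase_dense).sum(dim=-1)
        sparse_score = (query_sparse * phrase_sparse).sum(dim=-1)

        # Dynamic weight conditioned on the dense query and phrase representations.
        w = torch.sigmoid(
            self.gate(torch.cat([query_dense, phrase_dense], dim=-1))
        ).squeeze(-1)
        return w * dense_score + (1.0 - w) * sparse_score
```

In this reading, the gate lets the model lean on sparse lexical matching when exact n-gram overlap matters and on dense semantic similarity otherwise; the paper's actual weighting scheme may differ.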
Date of Conference: 10-14 July 2023
Date Added to IEEE Xplore: 25 August 2023