
E2S2: Encoding-Enhanced Sequence-to-Sequence Pretraining for Language Understanding and Generation


Abstract:

Sequence-to-sequence (seq2seq) learning is a popular paradigm for large-scale pretraining of language models. However, previous seq2seq pretraining models generally focus on reconstructive objectives on the decoder side and neglect the effect of encoder-side supervision, which we argue may lead to sub-optimal performance. To verify our hypothesis, we first empirically study the functionalities of the encoder and decoder in seq2seq pretrained language models, and find that the encoder plays a more important yet under-exploited role than the decoder with respect to downstream performance and neuron activation. Therefore, we propose an encoding-enhanced seq2seq pretraining strategy, namely E2S2, which improves seq2seq models by integrating more effective self-supervised signals into the encoder. Specifically, E2S2 adopts two self-supervised objectives on the encoder side: 1) locally denoising the corrupted sentence (denoising objective); and 2) globally learning better sentence representations (contrastive objective). With the help of both objectives, the encoder can effectively distinguish noise tokens and capture high-level (i.e., syntactic and semantic) knowledge, thus strengthening the seq2seq model's ability to perform accurate conditional generation. Across a wide range of downstream natural language understanding and generation tasks, E2S2 consistently improves the performance of its strong backbone models, e.g., BART and T5. For example, with the BART backbone, we achieve a +1.1% average gain on the general language understanding evaluation (GLUE) benchmark and a +1.75% F_{0.5} score improvement on the CoNLL-2014 dataset. We also provide in-depth analyses showing that the improvement stems from better linguistic representations. We hope that our work will foster future research on self-supervision for seq2seq language model pretraining.
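To make the training setup concrete, the sketch below illustrates how the two encoder-side objectives described above could be combined with the standard decoder-side seq2seq loss. It is a minimal, hypothetical PyTorch sketch: the token-level denoising head, the InfoNCE-style contrastive term, and the weights lambda_den and lambda_con are assumptions for illustration, not the paper's exact formulation.

    import torch
    import torch.nn.functional as F

    def e2s2_style_loss(denoise_logits, denoise_labels,
                        sent_emb_a, sent_emb_b, seq2seq_loss,
                        temperature=0.1, lambda_den=1.0, lambda_con=1.0):
        # Encoder-side denoising objective: recover the original tokens at
        # corrupted positions from the encoder outputs (positions that were
        # not corrupted carry the ignore label -100).
        den_loss = F.cross_entropy(
            denoise_logits.view(-1, denoise_logits.size(-1)),
            denoise_labels.view(-1),
            ignore_index=-100,
        )
        # Encoder-side contrastive objective: pull two views of the same
        # sentence together and push other in-batch sentences apart (InfoNCE-style).
        a = F.normalize(sent_emb_a, dim=-1)
        b = F.normalize(sent_emb_b, dim=-1)
        sim = a @ b.t() / temperature                   # (batch, batch) similarities
        targets = torch.arange(a.size(0), device=a.device)
        con_loss = F.cross_entropy(sim, targets)
        # Total loss: the usual decoder-side reconstruction term plus the
        # two encoder-side self-supervised terms.
        return seq2seq_loss + lambda_den * den_loss + lambda_con * con_loss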
Published in: IEEE Transactions on Knowledge and Data Engineering (Volume: 36, Issue: 12, December 2024)
Page(s): 8037 - 8050
Date of Publication: 18 December 2023


I. Introduction

Sequence-to-sequence (seq2seq) pretrained language models (PLMs) [1], [2], [3], [4], [5] are widely used in the natural language processing community and have achieved remarkable success in numerous downstream tasks of both natural language generation (NLG) and understanding (NLU), such as machine translation [2], [6], [7], text summarization [5], [8], grammatical error correction [9], and other discriminative tasks [3], [10], [11]. Specifically, seq2seq models are generally implemented with an encoder-decoder framework [12], where the encoder first models the input sentence and the decoder then generates the output tokens auto-regressively, conditioned on the encoder's representation.
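As a concrete illustration of this conditional generation setup (not the authors' pretraining code), the snippet below runs a public BART checkpoint with the Hugging Face transformers library: the encoder encodes the input sentence once, and generate() then decodes auto-regressively while attending to the encoder states.

    from transformers import AutoTokenizer, BartForConditionalGeneration

    tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
    model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

    inputs = tokenizer("Seq2seq models encode the input sentence first.",
                       return_tensors="pt")

    # Single encoder pass over the full input sentence.
    encoder_states = model.get_encoder()(**inputs).last_hidden_state

    # Auto-regressive decoding conditioned on the encoder representation.
    output_ids = model.generate(**inputs, max_new_tokens=20)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))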
