An Extensive Study on Pretrained Models for Natural Language Processing Based on Transformers


Abstract:

In recent years, Pretrained Language Models based on Transformers for Natural Language Processing (PLMT-NLP) have been highly successful in nearly every NLP task. These models originated with the Generative Pre-trained Transformer (GPT) and BERT (Bidirectional Encoder Representations from Transformers). Transformer architectures, self-supervised learning, and transfer learning form the foundation of these designs. Transformer-based pretrained models acquire general linguistic representations from vast amounts of text through self-supervised learning and transfer this knowledge to downstream tasks. Because they provide a solid foundation of knowledge, these models eliminate the need to train downstream models from scratch. In this paper, recent advances in PLMT-NLP are discussed. First, a brief introduction to self-supervised learning is presented; then the core concepts used in PLMT-NLP are explained. Furthermore, a list of relevant libraries for working with PLMT-NLP is provided. Lastly, the paper discusses upcoming research directions that may further improve these models. Owing to its thoroughness and relevance to current PLMT-NLP developments, this survey should serve as a valuable resource for readers seeking to understand both the basic ideas and recent developments.
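
As a concrete illustration of the workflow the abstract describes, the minimal sketch below uses the Hugging Face transformers library (one widely used library for working with pretrained Transformer models, though not necessarily one of those surveyed in the paper) to reuse pretrained weights for a downstream task and to probe BERT's self-supervised masked-token objective. The model name and example sentences are illustrative assumptions, not taken from the paper.

from transformers import pipeline

# Reuse pretrained weights for a downstream task (sentiment classification)
# without training a model from scratch; the pipeline downloads a default
# fine-tuned checkpoint on first use.
classifier = pipeline("sentiment-analysis")
print(classifier("Pretrained language models transfer well to new tasks."))

# The same model family exposes the self-supervised pretraining objective
# itself: BERT predicts masked tokens from their surrounding context.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")
print(fill_mask("Transformers learn linguistic [MASK] from large corpora."))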
Date of Conference: 16-18 March 2022
Date Added to IEEE Xplore: 13 April 2022
Conference Location: Tuticorin, India
