Abstract:
The aim of this publication is to compare the accuracy of the Bidirectional Encoder Representations from Transformers (BERT) and Generalized Autoregressive Pretraining for Language Understanding (XLNet) models in text classification with that of classical machine learning methods and algorithms. The methods analyzed are BERT, XLNet, the Bernoulli Naive Bayes classifier, the Gaussian Naive Bayes classifier, the Multinomial Naive Bayes classifier, and Support Vector Machines. The results show that when classifying 50,000 reviews in English, XLNet achieves the highest accuracy at 96%, nearly 8% higher than that of the best-performing classical classifier, Support Vector Machines.
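For context, the comparison of classical baselines the abstract describes can be sketched with scikit-learn. The snippet below is a minimal, illustrative setup, not the paper's actual experiments: the toy review data, TF-IDF features, and default hyperparameters are all assumptions standing in for the 50,000-review corpus and the authors' configuration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import BernoulliNB, GaussianNB, MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Toy stand-in for the 50,000 English reviews used in the paper.
texts = [
    "A wonderful, moving film with great performances.",
    "Absolutely terrible. I want my two hours back.",
    "Brilliant direction and a touching story.",
    "Dull, predictable, and badly acted.",
    "One of the best movies I have seen this year.",
    "A boring mess with no redeeming qualities.",
] * 50  # repeated so the train/test split has enough samples
labels = [1, 0, 1, 0, 1, 0] * 50  # 1 = positive, 0 = negative

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.2, random_state=42
)

# TF-IDF features are an assumed choice; the paper may use different features.
vectorizer = TfidfVectorizer()
Xtr = vectorizer.fit_transform(X_train)
Xte = vectorizer.transform(X_test)

classifiers = {
    "BernoulliNB": BernoulliNB(),
    "MultinomialNB": MultinomialNB(),
    "GaussianNB": GaussianNB(),  # requires dense input, handled below
    "LinearSVC": LinearSVC(),
}

for name, clf in classifiers.items():
    if name == "GaussianNB":
        # GaussianNB does not accept sparse matrices, so densify.
        clf.fit(Xtr.toarray(), y_train)
        preds = clf.predict(Xte.toarray())
    else:
        clf.fit(Xtr, y_train)
        preds = clf.predict(Xte)
    print(f"{name}: accuracy = {accuracy_score(y_test, preds):.3f}")
```

Fine-tuning BERT or XLNet for the same task would instead use a pretrained transformer checkpoint (e.g., via the Hugging Face transformers library), which is omitted here for brevity.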
Date of Conference: 02-04 June 2022
Date Added to IEEE Xplore: 20 July 2022