Abstract:
Neural Machine Translation (NMT) has emerged as a dominant technique offering fluent and natural translations across various language pairs, but it demands substantial computational resources and large datasets and struggles with rare words or phrases. This research leverages the Bhagavad Gita dataset, consisting of 700 parallel Sanskrit-English and Sanskrit-Hindi sentences, and a larger dataset of 13,000 English-Sanskrit parallel sentences from the Srimad Bhagavatam. Translation quality is evaluated with BLEU and METEOR scores. The Transformer model consistently outperformed the LSTM model in translating Sanskrit to English and Sanskrit to Hindi, achieving higher BLEU and METEOR scores. Notably, Sanskrit-Hindi translations scored better than Sanskrit-English translations, indicating that the Transformer model is more effective for languages with similar syntactic structures. The LSTM model exhibited overfitting on the smaller dataset but performed adequately on the larger one. Training curves indicated that Transformer models reached optimal performance within 20 epochs, whereas LSTM models required more extensive training.
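The paper does not specify its evaluation tooling; the following is a minimal sketch of how the BLEU and METEOR scoring described above could be computed, assuming NLTK as the metric library. The reference and hypothesis sentences are hypothetical placeholders, not drawn from the Bhagavad Gita or Srimad Bhagavatam datasets.

import nltk
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction
from nltk.translate.meteor_score import meteor_score

# METEOR relies on WordNet for synonym matching.
nltk.download("wordnet", quiet=True)

# Hypothetical English reference and model output (placeholders).
reference = "the soul is never born and never dies".split()
hypothesis = "the soul is never born nor does it die".split()

# Corpus-level BLEU; smoothing avoids zero scores on short sentences,
# which matters for verse-length parallel data like that used here.
bleu = corpus_bleu([[reference]], [hypothesis],
                   smoothing_function=SmoothingFunction().method1)

# METEOR expects pre-tokenized references and hypothesis.
meteor = meteor_score([reference], hypothesis)

print(f"BLEU:   {bleu:.4f}")
print(f"METEOR: {meteor:.4f}")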
Published in: 2024 International Conference on Intelligent Algorithms for Computational Intelligence Systems (IACIS)
Date of Conference: 23-24 August 2024
Date Added to IEEE Xplore: 24 October 2024