Enhancing Whisper’s Accuracy and Speed for Indian Languages through Prompt-Tuning and Tokenization



Abstract:

Automatic speech recognition has recently seen significant advances with large foundation models such as Whisper. However, these models often struggle to perform well in low-resource settings such as Indian languages. This paper explores two novel approaches to enhance Whisper’s multilingual speech recognition performance in Indian languages. First, we propose prompt-tuning with language family information, which improves Whisper’s accuracy on linguistically similar languages. Second, we introduce a novel tokenizer that reduces the number of generated tokens, thereby accelerating Whisper’s inference speed. Our extensive experiments demonstrate that the tokenizer significantly reduces inference time, while prompt-tuning improves accuracy across various Whisper model sizes, including Small, Medium, and Large. Together, these techniques achieve a balance between word error rate (WER) and inference speed.
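
The paper’s learned prompts and custom tokenizer are not reproduced here, but the sketch below shows, using the open-source openai-whisper package, where both ideas plug in: a language-family hint passed through Whisper’s existing initial_prompt conditioning (the paper’s prompt-tuning would learn soft prompt embeddings rather than use fixed text), and a token count on Devanagari text that illustrates why a token-reducing tokenizer speeds up decoding. The prompt wording, audio path, and choice of Hindi are assumptions for illustration only.

import whisper
from whisper.tokenizer import get_tokenizer

# Load one of the model sizes studied in the paper (Small/Medium/Large).
model = whisper.load_model("small")

# Hypothetical plain-text language-family hint fed through Whisper's
# built-in `initial_prompt` conditioning. The paper's prompt-tuning
# would instead learn soft prompt embeddings carrying this information.
family_hint = "Hindi, an Indo-Aryan language."  # assumed wording
result = model.transcribe(
    "sample_hi.wav",  # placeholder audio file
    language="hi",
    initial_prompt=family_hint,
)
print(result["text"])

# Why a token-reducing tokenizer helps speed: Whisper's multilingual
# byte-level BPE fragments Devanagari text into many tokens, and each
# generated token costs one autoregressive decoding step.
tok = get_tokenizer(multilingual=True, language="hi")
sentence = "मौसम आज बहुत अच्छा है"  # "The weather is very nice today"
ids = tok.encode(sentence)
print(f"{len(sentence.split())} words -> {len(ids)} tokens")
# A tokenizer with script-aware merges would emit fewer tokens per
# sentence, directly cutting the number of decoding steps.

On stock Whisper, the token-per-word ratio for Devanagari text is typically several times higher than for English, which is exactly the decoding overhead a vocabulary with fewer, larger tokens would target.
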
Date of Conference: 06-11 April 2025
Date Added to IEEE Xplore: 07 March 2025

Conference Location: Hyderabad, India
