Most state-of-the-art speaker recognition systems improve their performance by exploiting glottal information. Although they successfully model glottal changes as features for the recognition task, they do not account for the spectral variations that the glottal source causes. We propose a method that lessens this influence using both long-term and short-term glottal information. After this compensation, spectral features become more discriminative for text-independent automatic speaker recognition (ASR). The method was evaluated on the YOHO corpus and on our SRMC corpus, and the experiments show promising results.
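The abstract does not give implementation details. As a minimal sketch of one common way to attenuate the glottal contribution to the short-term spectrum, the snippet below applies low-order linear-prediction (LPC) inverse filtering to a frame, in the spirit of IAIF-style glottal analysis. The function names, model orders, and frame handling are illustrative assumptions, not the paper's actual method.

```python
import numpy as np
from scipy.signal import lfilter

def lpc(x, order):
    """Autocorrelation-method LPC via the Levinson-Durbin recursion.

    Returns prediction-error filter coefficients a = [1, a1, ..., a_order].
    """
    n = len(x)
    # Biased autocorrelation estimates r[0..order].
    r = np.array([np.dot(x[:n - i], x[i:]) for i in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    e = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / e
        a[1:i] = a[1:i] + k * a[i - 1:0:-1]
        a[i] = k
        e *= (1.0 - k * k)
    return a

def remove_glottal_contribution(frame, glottal_order=2):
    """Crude glottal compensation for one analysis frame.

    A low-order LPC fit captures mostly the glottal/lip spectral tilt;
    inverse-filtering the frame with it yields a signal whose spectrum
    is dominated by the vocal tract (orders are illustrative choices).
    """
    g = lpc(frame, glottal_order)
    return lfilter(g, [1.0], frame)
```

In an IAIF-style pipeline this tilt-removal step would typically precede a higher-order LPC analysis of the vocal tract; here only the first, glottal-compensation stage is sketched.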