In this work, the MELP (mixed excitation linear prediction) speech coding algorithm has been used for speech conversion. Speech conversion aims to modify the speech of one speaker so that the modified speech sounds as if spoken by another speaker. The MELP speech model has been used to derive a mapping between the speech models of the two speakers, yielding a context-free speech conversion. We have mainly considered the spectral properties of the speakers. Using 230 sentences from the two speakers, a mapping between the 4-stage vector quantization indices for the line spectral frequencies (LSFs) of the two speakers has been obtained. Two methods have been proposed to derive a codebook for the second speaker from this mapping, and both have been applied together with pitch modification during synthesis. The first method replaces the LSF index of the first speaker with the index of the second speaker that co-occurs most often during training. The second method forms a new LSF codebook for the second speaker by taking the weighted average over the histogram of the second speaker's indices corresponding to each index of the first speaker. Subjective ABX listening tests have been carried out, and correct speaker perception rates of 70% and 65% have been obtained for the first and second spectral conversion methods, respectively.
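The two codebook-derivation methods described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a single-stage VQ with a small synthetic codebook and a synthetic index co-occurrence histogram (the actual system uses 4-stage VQ on 10-dimensional LSF vectors), and all variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a K-entry LSF codebook for the second (target) speaker.
K = 8     # codebook size (simplified; the real system uses 4-stage VQ)
DIM = 10  # LSF vector dimension, as in 10th-order LPC
codebook_b = np.sort(rng.uniform(0.0, np.pi, size=(K, DIM)), axis=1)

# Co-occurrence histogram gathered from parallel training sentences:
# hist[i, j] = how often source-speaker index i aligned with target index j.
hist = rng.integers(1, 20, size=(K, K)).astype(float)

# Method 1: replace the source speaker's index i with the target index
# that co-occurred most often during training.
map_most_frequent = hist.argmax(axis=1)            # shape (K,)
codebook_method1 = codebook_b[map_most_frequent]   # mapped codebook, (K, DIM)

# Method 2: build a new codebook entry for each source index i as the
# weighted average of the target codewords, weighted by the normalized
# histogram row for index i.
weights = hist / hist.sum(axis=1, keepdims=True)   # rows sum to 1
codebook_method2 = weights @ codebook_b            # shape (K, DIM)
```

At synthesis time, the source speaker's quantized LSF index would simply be looked up in `codebook_method1` or `codebook_method2` to obtain the converted spectral envelope.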