To combat jitter in voice streaming over packet networks, playout buffering algorithms are used at the receiver side. Most proposed solutions rely on two main operations: predicting the delay statistics of future packets, and setting the end-to-end delay so as to limit or avoid packet losses. In recent years, a new approach has been proposed that uses a quality model to evaluate the impact of both packet loss and delay on voice quality; this model is used to find the buffer setting that maximizes the expected quality. In this paper, we present a playout buffering algorithm whose main contribution is the extension of this quality-based approach to voice communications affected by bursty packet losses. This work is motivated by two main considerations: most IP telephony applications experience bursty rather than random losses, and human perception of speech quality is significantly affected by the temporal correlation of losses. To this end, we use the extensions to the ITU-T E-Model proposed within ETSI TIPHON to incorporate the effects of loss burstiness on the perceived quality. The resulting playout algorithm estimates how the characteristics of the loss process vary with the end-to-end delay, weights the loss and delay effects on the perceived quality, and maximizes the overall quality to find the optimal setting for the playout buffer. The experimental results demonstrate the effectiveness of the proposed technique.
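To make the quality-maximization idea concrete, the following is a minimal sketch (not the authors' implementation) of a buffer-setting step: from a window of measured network delays it estimates, for each candidate playout delay, the loss probability and a simple burst ratio, scores the candidate with a simplified E-Model rating R = 93.2 − Id − Ie,eff (using the common piecewise-linear approximation for the delay impairment Id and the G.113-style burst-loss extension for Ie,eff), and picks the candidate with the highest rating. The codec constants `ie` and `bpl` are illustrative placeholders, and the burst-ratio estimator is a simplification of what a real receiver would use.

```python
def loss_stats(delays, playout_delay):
    """Fraction of packets arriving after the playout deadline, plus a
    simple burst ratio: mean observed loss-run length scaled by (1 - p),
    i.e. relative to the mean run length expected under random loss."""
    lost = [d > playout_delay for d in delays]
    p = sum(lost) / len(lost)
    if p == 0:
        return 0.0, 1.0
    runs, cur = [], 0          # lengths of consecutive-loss runs
    for is_lost in lost:
        if is_lost:
            cur += 1
        elif cur:
            runs.append(cur)
            cur = 0
    if cur:
        runs.append(cur)
    mean_burst = sum(runs) / len(runs)
    return p, max(mean_burst * (1 - p), 1.0)

def r_factor(delay_ms, ppl, burst_r, ie=0.0, bpl=4.3):
    """Simplified E-Model rating R = 93.2 - Id - Ie_eff.
    Id: piecewise-linear delay-impairment approximation.
    Ie_eff: burst-loss extension with codec constants Ie, Bpl
    (values here roughly correspond to G.711 with loss concealment)."""
    id_ = 0.024 * delay_ms + 0.11 * (delay_ms - 177.3) * (delay_ms > 177.3)
    ppl_pct = 100.0 * ppl
    ie_eff = ie + (95.0 - ie) * ppl_pct / (ppl_pct / burst_r + bpl)
    return 93.2 - id_ - ie_eff

def best_playout(delays, candidates):
    """Return the candidate playout delay (ms) maximizing the R factor."""
    def score(t):
        p, burst_r = loss_stats(delays, t)
        return r_factor(t, p, burst_r)
    return max(candidates, key=score)
```

With a delay trace containing a burst of late packets, a small playout delay incurs a correlated loss run whose Ie,eff penalty outweighs the extra Id of a larger buffer, so the maximization selects the longer playout delay; with a clean trace, the delay impairment dominates and a shorter buffer wins.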