This paper presents a comprehensive analysis of a variable time-scale streaming technique, VTSS, in which rate changes are obtained by varying the inter-packet transmission interval rather than by altering the source coding rate, as is most common. Instead of constraining the transmitter to operate in real time, the time scale of the packet scheduler can vary from zero, when the network is congested, to as fast as the channel bandwidth allows, i.e., faster than real time, when the network is lightly loaded. Although this approach is reportedly used in commercial streaming products, the technique has not yet been analyzed in a rigorous fashion, nor has it been compared with other state-of-the-art streaming techniques. This work first presents a theoretical analysis of the performance achievable by the VTSS approach and shows that, for the same channel conditions, VTSS yields a total distortion that is lower than or, in the worst case, equal to that of the standard real-time source-rate-adaptive approach. A lower bound on the receiver buffer size is also derived. Network simulations then compare the performance of a TCP-friendly test implementation of VTSS with an ideal real-time source-rate-adaptive technique, whose performance, being ideal, represents the upper bound of any transmission scheme based on source rate adaptation. The simulation results, also based on actual network traces, show that the VTSS approach delivers higher perceptual quality (up to 1.2 dB PSNR in the considered scenarios) and reduced video quality fluctuations (a PSNR standard deviation of 1.6 dB instead of 4.9 dB) for a wide range of standard video sequences. Perceptual quality evaluation by means of PVQM confirms these results. The gains, as expected, are even more pronounced (7.6 dB PSNR on average) when compared with real-time constant-bit-rate video transmission.
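The core scheduling idea described above can be illustrated with a minimal sketch: the coding rate stays fixed, and only the spacing between packet transmissions adapts to the estimated channel bandwidth. All names and the bandwidth-estimation input below are hypothetical and are not taken from the paper's actual scheduler.

```python
def vtss_interval(frame_bits: float, available_bw: float) -> float:
    """Illustrative inter-packet transmission interval for one frame
    under a variable time-scale policy (hypothetical sketch).

    frame_bits   -- size of the encoded frame in bits (coding rate is NOT changed)
    available_bw -- current estimate of usable channel bandwidth in bit/s;
                    0 models a fully congested network
    """
    if available_bw <= 0:
        # Time scale collapses to zero: transmission stalls during congestion.
        return float("inf")
    # Otherwise send as fast as the channel allows; the resulting interval
    # may be shorter (faster than real time) or longer than the playout
    # spacing, while the source coding rate stays untouched.
    return frame_bits / available_bw
```

For a 300 kbit frame at 30 fps (real-time spacing of 1/30 s), a 30 Mbit/s channel gives a 10 ms interval (faster than real time, filling the receiver buffer), while a 3 Mbit/s channel gives 100 ms (slower than real time, draining it).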