Packet delay variation (or delay jitter) measurements are used by applications to estimate the service quality received from the network, and by network operators to monitor network operating state. Since a single jitter measurement requires two delay samples, the time scale separating those two samples may affect the statistics of the measured jitter. Current common practice computes jitter statistics by treating all measurements as valid samples from the same sample space. In this paper, we perform scaling analyses on measured delay sequences to show that the proper way to conduct jitter statistical analysis is to first group jitter samples into clusters, each containing samples taken over the same or similar time scales, and then to carry out statistical analysis separately on each cluster. This special treatment is required because of the strong short-range dependency among packet delays introduced by queueing effects. The tool we select to perform the scaling analysis is the Deviation-Lag Function (DLF). We show that congestion-related information about congested end-to-end paths can be derived from their DLF plots, and we discuss the potential use of DLF for bottleneck queue detection.
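To make the lag-grouping idea concrete, the following is a minimal sketch, assuming the Deviation-Lag Function is defined as the standard deviation of jitter samples d[i+k] - d[i] computed separately for each lag k (the exact definition used in the paper may differ). The synthetic AR(1) delay sequence is a stand-in for a delay trace with queueing-induced short-range dependency; the per-lag deviations it produces differ across lags, illustrating why pooling all jitter samples into one sample space mixes distinct distributions.

```python
import numpy as np

def deviation_lag_function(delays, max_lag):
    """Assumed DLF form: for each lag k, the standard deviation of
    jitter samples d[i+k] - d[i] taken over that time scale."""
    delays = np.asarray(delays, dtype=float)
    return {k: float(np.std(delays[k:] - delays[:-k]))
            for k in range(1, max_lag + 1)}

# Synthetic delay trace with short-range dependency (AR(1) process),
# a rough proxy for queueing-correlated end-to-end delays.
rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, 1000)
d = np.empty(1000)
d[0] = noise[0]
for i in range(1, 1000):
    d[i] = 0.9 * d[i - 1] + noise[i]

dlf = deviation_lag_function(d, 50)
# For correlated delays, the deviation grows with lag before flattening,
# so jitter samples taken at different lags are not exchangeable.
```

Plotting `dlf` against lag gives the DLF plot the abstract refers to; the shape of the curve (how quickly it saturates) is what carries the congestion-related information.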