The past few years have seen researchers debate the size of buffers required at core Internet routers. Much of this debate has focused on TCP throughput, and recent arguments supported by theory and experimentation suggest that a few tens of packets of buffering suffice at bottleneck routers for TCP traffic to realise acceptable link utilisation. This paper introduces a small fraction of real-time (i.e. open-loop) traffic into the mix and discovers an anomalous behaviour: in this regime of very small buffers, losses for real-time traffic do not fall monotonically with buffer size, but instead exhibit a region where larger buffers cause higher losses. Our contributions pertaining to this phenomenon are threefold. First, we demonstrate this anomalous loss performance for real-time traffic via extensive simulations, including real video traces. Second, we provide qualitative explanations for the anomaly and develop a simple analytical model that reveals how the dynamics of buffer sharing between TCP and real-time traffic lead to this behaviour. Third, we show how factors such as traffic characteristics and link rates affect the severity of the anomaly. Our study particularly informs the design of all-optical packet routers (envisaged to have buffers of a few tens of packets), and warns network service providers operating in this regime that investing in larger buffers can degrade quality-of-service performance.
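The experimental setup described above can be sketched as a toy slotted-time simulation: a drop-tail FIFO shared by bursty "TCP-like" traffic and low-rate open-loop real-time traffic, with the buffer size swept while per-class real-time loss is measured. All rates, burst sizes, and the slotted-time model below are illustrative assumptions, not the paper's configuration; in particular, TCP's closed-loop reaction to drops is omitted, and that feedback is precisely the ingredient behind the anomaly, so this sketch shows only the measurement methodology, not the anomalous region itself.

```python
import random

def simulate(buffer_pkts, n_slots=200_000, seed=0):
    """Toy slotted FIFO: one packet served per slot; 'TCP-like' traffic
    arrives in on/off bursts, real-time traffic as a low-rate Bernoulli
    stream. Returns the fraction of real-time packets dropped.
    All parameters are illustrative, not taken from the paper."""
    rng = random.Random(seed)
    queue = 0                     # packets currently buffered
    rt_sent = rt_lost = 0
    burst = 0                     # packets left in the current TCP burst
    for _ in range(n_slots):
        # Crude TCP model: occasionally emit a window's worth of packets.
        if burst == 0 and rng.random() < 0.05:
            burst = rng.randint(5, 15)
        if burst > 0:
            burst -= 1
            if queue < buffer_pkts:
                queue += 1        # TCP packet enqueued (drops ignored:
                                  # a real TCP would back off here)
        # Open-loop real-time arrival at low rate.
        if rng.random() < 0.1:
            rt_sent += 1
            if queue < buffer_pkts:
                queue += 1
            else:
                rt_lost += 1      # drop-tail loss for the real-time class
        # Service: one packet departs per slot.
        if queue > 0:
            queue -= 1
    return rt_lost / rt_sent

# Sweep buffer sizes in the "very small buffers" regime.
for b in (5, 10, 20, 50):
    print(b, round(simulate(b), 4))
```

Because the TCP source here does not react to loss, this sketch produces the conventional monotone picture (larger buffer, fewer real-time drops); reproducing the paper's non-monotonic region would require adding window-based feedback to the TCP model.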