TCP is the dominant transport protocol used in the Internet, and its performance fundamentally governs the performance of Internet applications. It is well known that packet losses can adversely affect the duration of TCP connections; what is not fully understood, however, is how well TCP's design deals with losses. In this paper, we systematically evaluate the impact of the design parameters associated with TCP's loss detection and recovery mechanisms on the performance of real-world TCP connections. For this, we rely on an analysis tool that partially emulates the sender-side TCP implementations of five prominent OSes to passively analyze traces of TCP connections. Our study passively analyzes more than 2.8 million real Internet TCP connections. We find that the recommended as well as the widely implemented settings of TCP parameters are not optimal for a significant fraction of Internet connections.
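As context for the loss-detection parameters discussed above, one of the best-known examples is the retransmission timeout (RTO) estimator standardized in RFC 6298. The sketch below is an illustrative Python rendering of that RFC's recommended computation (it is not the paper's analysis tool, and the constant values shown are the RFC's recommended defaults, precisely the kind of setting whose optimality the paper questions):

```python
# Illustrative sketch of RFC 6298 RTO estimation.
# ALPHA, BETA, K, and MIN_RTO are the RFC's recommended values.
ALPHA, BETA = 1 / 8, 1 / 4  # smoothing gains for SRTT and RTTVAR
K = 4                       # variance multiplier
MIN_RTO = 1.0               # recommended lower bound on RTO (seconds)

def update_rto(srtt, rttvar, rtt_sample):
    """Fold one RTT measurement (seconds) into the smoothed estimators
    and return the updated (srtt, rttvar, rto)."""
    if srtt is None:
        # First measurement (RFC 6298, Section 2.2)
        srtt = rtt_sample
        rttvar = rtt_sample / 2
    else:
        # Subsequent measurements (RFC 6298, Section 2.3);
        # RTTVAR must be updated before SRTT.
        rttvar = (1 - BETA) * rttvar + BETA * abs(srtt - rtt_sample)
        srtt = (1 - ALPHA) * srtt + ALPHA * rtt_sample
    rto = max(MIN_RTO, srtt + K * rttvar)
    return srtt, rttvar, rto

# Example: feed in a few RTT samples (seconds)
srtt = rttvar = None
for sample in [0.100, 0.120, 0.080]:
    srtt, rttvar, rto = update_rto(srtt, rttvar, sample)
```

Note how the 1-second MIN_RTO floor dominates for short-RTT paths: with 100 ms RTTs the computed `srtt + K * rttvar` term stays well below one second, so a lost packet still stalls the sender for a full second, which is the kind of mismatch between recommended settings and real connections the study examines.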