We explored the TCP-incast throughput collapse problem in data center networks from an application-layer perspective. In particular, we presented a model and analyzed the performance of the application-based approach under a TCP-incast scenario. The main idea of the approach is to schedule the servers' responses to data requests so that no packet losses occur at the bottleneck link. The main result we derived is the achievable goodput of a data center application when lossless scheduling is used. The simulations confirmed the validity of our model and of the results derived through the theoretical analysis. Our future work is to explore practical deployment scenarios of the approach and to implement it in a real data center network.
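To make the lossless-scheduling idea concrete, the following sketch groups responding servers into batches whose concurrent responses fit within the bottleneck buffer plus the in-flight capacity of the link, so that a batch cannot overflow the switch. The batching policy, function names, and all numeric parameters are our own illustrative assumptions, not the specific scheduling algorithm or goodput model analyzed in the paper.

```python
# Illustrative sketch (assumed policy): stagger server responses in batches so
# that the data sent concurrently never exceeds what the bottleneck link and
# switch buffer can absorb, avoiding incast packet loss.

def schedule_responses(num_servers, block_bytes, buffer_bytes,
                       link_capacity_bps, rtt_s):
    """Return a list of batches; each batch is a list of server indices
    allowed to transmit their response block at the same time."""
    # Roughly, one batch can hold a bandwidth-delay product of in-flight data
    # plus whatever the bottleneck buffer can queue without dropping packets.
    bdp_bytes = link_capacity_bps * rtt_s / 8
    servers_per_batch = max(1, int((buffer_bytes + bdp_bytes) // block_bytes))

    return [list(range(start, min(start + servers_per_batch, num_servers)))
            for start in range(0, num_servers, servers_per_batch)]


if __name__ == "__main__":
    # Example (assumed values): 48 servers, 32 KB response blocks, 1 Gbps
    # bottleneck link, 64 KB switch buffer, 100 us round-trip time.
    batches = schedule_responses(
        num_servers=48,
        block_bytes=32 * 1024,
        buffer_bytes=64 * 1024,
        link_capacity_bps=1_000_000_000,
        rtt_s=100e-6,
    )
    for i, batch in enumerate(batches):
        print(f"batch {i}: servers {batch}")
```

Under these assumed parameters the scheduler serializes the 48 responses into small batches; the trade-off is added completion latency in exchange for zero buffer overflows at the bottleneck, which is the effect the goodput analysis in the paper quantifies.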