The Sockets Direct Protocol (SDP) is a stream transport protocol capable of supporting kernel-bypass as well as zero-copy data transfers. It was developed for new networking technologies that support user-level networking and remote direct memory access (RDMA). This paper studies the performance of an SDP implementation over 4X InfiniBand. SDP performance is studied for two APIs: the regular sockets API, without zero-copy transfers, and the asynchronous I/O (AIO) API, which supports zero-copy transfers and multiple outstanding transfers. Tests were run to measure latency, throughput, and CPU load. One goal was to determine the message-size threshold beyond which it becomes beneficial to use SDP with the AIO API instead of the regular sockets API. It is shown that the optimal threshold differs depending on whether the goal is to maximize throughput alone or throughput per unit of CPU load. SDP performance is also compared to InfiniBand verbs performance and to TCP performance over Gigabit Ethernet. It is shown that SDP is capable of low latencies (31 μs for small messages) and very high throughput at low CPU loads (close to 6 Gb/s with 64 KB buffers at under 30% CPU load).