Modern interconnects and the corresponding high-performance MPI implementations have fueled the growing popularity of compute clusters and cluster-based applications. With the recent introduction of the iWARP (Internet Wide Area RDMA Protocol) standard, RDMA and zero-copy data-transfer capabilities have been standardized for Ethernet networks. While traditional Ethernet networks were largely limited to kernel-based TCP/IP stacks and their inherent overheads, the iWARP capabilities of newer GigE and 10 GigE adapters remove this barrier and expose the available performance potential. To enable applications to harness the performance benefits of iWARP, and to quantify the extent of these improvements, we present MPI-iWARP, a high-performance MPI implementation over the OpenFabrics verbs interface. Our preliminary results with Chelsio T3B adapters show improvements of up to 37% in bandwidth, 75% in latency, and 80% in MPI allreduce performance compared to MPICH2 over TCP/IP. To the best of our knowledge, this is the first design, implementation, and evaluation of a high-performance MPI over the iWARP standard.