Computational simulations generate vast amounts of data that require effective storage and retrieval technologies. Traditional file access interfaces rely on ubiquitous transports that impose severe performance restrictions and adapt poorly to parallel input/output (I/O). Approaches based on remote direct memory access (RDMA) move data between process address spaces with streamlined mediation and reduced operating-system involvement, using synchronization semantics that differ from those of ubiquitous transports. However, currently available RDMA-based file systems are not designed for parallel I/O, and very few of today's parallel file systems fully exploit the capabilities of RDMA. This paper analyzes the adaptability of RDMA-based transports to parallel I/O, using a commercial-grade implementation of MPI coupled with a complete, stable DAFS implementation. Combining RDMA semantics with parallel I/O enables overlap of communication and computation and improved bandwidth, reducing overhead and leaving more CPU time for applications. This paper also shows that intelligent middleware can achieve significant parallel I/O performance without imposing any specific requirements on the file system.